
By Ryan McNeill
For two years, a team of reporters at The Dallas Morning News has written stories of breakdowns in patient care at one of Texas’ most important medical institutions, Parkland Memorial Hospital.
Parkland is where President John F. Kennedy was taken after he was shot in Dallas in 1963. More important to everyday life in North Texas, Parkland is, for many, the hospital of last resort, where the poor and uninsured turn for medical care.
For years, Parkland officials have claimed it is one of America’s top public hospitals. But in story after story, News reporters showed how systemic breakdowns in care left patients maimed or dead. One story, for example, told of Jessie Mae Ned, a longtime Parkland employee who was left a destitute amputee after a doctor trainee botched routine knee surgery.
Officials from Parkland and its academic partner, the University of Texas Southwestern Medical Center, were highly critical of the News’ coverage. For example, Daniel K. Podolsky, president of UT Southwestern, called the account of Ned’s surgical ordeal “an anecdotal approach … to writing about issues of quality of care by focusing on highly unfortunate, but also highly unrepresentative, outcomes.”
These same officials insisted that their data showed a history of outstanding performance at Parkland. However, despite repeated requests, both institutions refused to provide this data to The News.
What little benchmarking data we did obtain suggested there were problems. We were confident in our stories, but quantitative evidence would help reinforce and broaden the coverage.
We soon discovered that assembling that evidence wouldn’t be so easy.
Most health reporters are probably familiar with Hospital Compare, a website run by the federal government that has a wealth of data on thousands of hospitals nationally. However, we found two problems with using this data.
First, the data that would allow us to measure Parkland’s performance covered only Medicare patients. For some hospitals, including Parkland, Medicare patients make up a relatively small percentage of those treated.
Second, and arguably more important, the data available at the time did not address the issue we were trying to examine: how often hospital patients suffer potentially preventable medical complications.
In early 2011, we began examining a dataset called the Texas Inpatient Public Use Data File (PUDF). The PUDF contains a summary of diagnoses, procedures, demographic information, severity of illness and outcomes about patients from virtually every hospital in Texas. (Other states have similar datasets, typically called discharge or administrative data.)
Then we discovered that the Agency for Healthcare Research and Quality, a part of the U.S. Department of Health and Human Services, had developed software to examine potentially preventable medical complications.
Known as the Patient Safety Indicators, the software appeared to be everything we needed. First, it was designed specifically to work with data like Texas’ PUDF. Second, it had been used and tested by some of the nation’s leading researchers. Finally, and perhaps most important, the software is an impartial observer.
AHRQ offers two versions of the software for free. One is a Windows-based version that anyone can use. Another uses SAS, a very expensive statistical analysis package employed by only a few newspapers in America.
Because of the size of the data we would be examining – nearly 9 million records in all – we made a pitch to newsroom management for SAS. They agreed.
Once we decided on the version of the software, we needed to obtain more recent PUDF data – another huge financial commitment. Each year’s worth of data can cost about $2,500. We bought data from 2009, the most recent full year available.
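For readers curious about the mechanics, the first step with data like this is simply stacking the quarterly files into one year of discharge records. Our actual analysis ran in SAS, but here is a minimal Python sketch of that step; the file names and column names are placeholders, not the real PUDF layout, which is documented in the file specifications that ship with the data.

```python
# A minimal sketch, not the production SAS workflow: stack quarterly
# PUDF base files into one year of discharge records with pandas.
# File and column names below are placeholders for the real PUDF spec.
import glob
import pandas as pd

frames = []
for path in sorted(glob.glob("pudf_base_2009_q*.txt")):
    # Read everything as text so diagnosis/procedure codes and
    # hospital identifiers keep their leading zeros.
    frames.append(pd.read_csv(path, sep="\t", dtype=str, low_memory=False))

discharges = pd.concat(frames, ignore_index=True)
print(f"{len(discharges):,} discharge records for 2009")

# Keep the fields a safety-indicator analysis typically needs:
# hospital ID, demographics, diagnosis and procedure codes, discharge status.
columns_of_interest = ["provider_id", "sex", "age_group",
                       "principal_diagnosis", "other_diagnoses",
                       "principal_procedure", "discharge_status"]
analysis = discharges[[c for c in columns_of_interest if c in discharges.columns]]
```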
Our first analysis showed poor results for Parkland.
The software produces a so-called composite score, which is a weighted average built from eight of the 17 indicators. It gives a broad look across multiple indicators.
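To make the idea concrete, here is a toy illustration of how a weighted composite can be built from individual indicator scores. The indicator names, scores and weights below are invented for the example; AHRQ publishes the actual component indicators and weights with the software.

```python
# Toy illustration of a weighted composite: each component indicator
# contributes its score times a fixed weight, and the weights sum to 1.
# Names and numbers here are invented for the example only.
indicator_scores = {
    "postoperative_sepsis": 1.21,   # observed-to-expected style ratios
    "accidental_puncture":  1.45,
    "postoperative_clots":  0.97,
}
weights = {
    "postoperative_sepsis": 0.40,
    "accidental_puncture":  0.35,
    "postoperative_clots":  0.25,
}

composite = sum(indicator_scores[name] * weights[name] for name in weights)
print(round(composite, 3))  # above 1 means worse than expected in this toy scheme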
Parkland was among the state’s worst large hospitals, those with 200 or more beds. Another finding: The larger of the two hospitals owned by UT Southwestern, known as University Hospital-St. Paul, also performed among the worst hospitals on the composite in 2009.
UT Southwestern provides the resident physicians at Parkland. And, at the time, UT Southwestern had launched a major marketing campaign based on U.S. News & World Report naming it the best hospital in Dallas.
We needed to make sure our results weren’t a one-year blip. So we purchased another year of data, from 2008. We already had 2007 data from a previous project, giving us results from 2007, 2008 and 2009. The poor performance by Parkland and UT Southwestern wasn’t a blip.
Parkland was the worst-performing hospital on the composite in 2007, third-worst in 2008 and fourth-worst in 2009. UT Southwestern-St. Paul was fourth-worst in 2008 and third-worst out of 105 large hospitals in 2009.
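The year-over-year check itself is straightforward once each year’s composite scores are in hand: rank the large hospitals within each year and see where a given hospital lands. A sketch of that step, with placeholder hospital names and scores rather than our real results, might look like this:

```python
# Sketch of the year-over-year check: rank large hospitals (200+ beds)
# by composite score within each year. The data frame here is a
# placeholder for the real per-hospital results, not actual findings.
import pandas as pd

results = pd.DataFrame({
    "year":      [2007, 2007, 2008, 2008, 2009, 2009],
    "hospital":  ["Hospital A", "Hospital B"] * 3,
    "beds":      [850, 450] * 3,
    "composite": [1.42, 1.10, 1.35, 1.15, 1.30, 1.05],
})

large = results[results["beds"] >= 200].copy()
# Rank 1 = worst composite score within that year.
large["rank_worst"] = large.groupby("year")["composite"].rank(ascending=False, method="min")
print(large.sort_values(["year", "rank_worst"]))
```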
Just weeks before we published our analysis, Parkland was threatened with shutdown by the federal government. Spurred by our earlier reporting, the U.S. Centers for Medicare & Medicaid Services had conducted a sweeping inspection of the hospital and found that patients had been harmed or put in “immediate jeopardy.” Parkland averted closure only by agreeing to a rare form of federal oversight – it was the largest hospital, and only the fifth in the United States, to be so sanctioned.
Our analysis allowed us to tell readers that patient safety issues had existed for years at Parkland as well as UTSW-St. Paul. Both ranked poorly on many of the same indicators, such as accidental punctures and lacerations during procedures.
In fact, a number of large Dallas-area hospitals also ranked among the 10 worst in Texas, including the metro area’s other major public hospital, John Peter Smith in Fort Worth.
A few tips
By far, the biggest key to the success of this project was enlisting top experts in the field as advisers. Our work would not have been possible without the guidance of people such as University of California-Davis professor Dr. Patrick Romano, the clinical lead on the AHRQ quality indicators project, as well as Kathryn McDonald of Stanford University, the project’s lead researcher.
Developing a relationship with these leading experts allowed us to avoid errors and understand in real time whether issues raised by critics were legitimate.
Another key was becoming as knowledgeable as possible about the software. Take the time to read the documentation. Read it more than once.
This was beneficial in several ways. It allowed us to go to the experts with relatively informed questions and show them we were serious about doing things right. And our enhanced knowledge of the Patient Safety Indicators allowed us to address even our most experienced critics.
Before publication, we provided all large hospitals in the Dallas-Fort Worth area with a report card. We asked them to comment on or dispute the findings.
Initially, the DFW Hospital Council blasted our work as lacking credibility. But after we pointed out that it provides member hospitals with the very same analysis, the council’s president said he wasn’t challenging the accuracy of our analysis. His major opposition was based on identifying individual hospitals.
In the end, it was UTSW’s Podolsky who voiced the strongest opposition to our work after publication. He wrote that our stories “indicate that they have neither put aside their anecdotal approach nor have they been willing to use quality data appropriately to present accurate assessments of Parkland or, in our case, of University Hospital-St. Paul.”
News managing editor George Rodrigue answered in a point-by-point response published on the newspaper’s website, DallasNews.com.
“We believe that our story was a fair and thorough exploration of hospital performance according to the most comprehensive, publicly available data on hospital patients, using a statistical tool that was developed by the federal government and endorsed and used by the hospital industry,” he wrote.
Rodrigue noted that Podolsky had dismissed our previous work as anecdotal and demanded benchmarking, even as federal inspectors cited evidence of systemic treatment failures at Parkland. “When we perform the statistical comparison that UTSW says it desires, using the only patient-safety data available to us, it (UTSW) argues that the results are meaningless,” Rodrigue wrote.
We plan to continue exploring patient safety and other care issues for the foreseeable future. Researchers have produced three other AHRQ software packages that examine volume of care, pediatric care and other areas that will be useful in helping our readers make decisions about where to seek medical treatment.
Ryan McNeill is a staff writer at The Dallas Morning News.





