One of the most challenging aspects of reporting on the pandemic has been accessing reliable, accurate data about COVID-19 and its impact on Americans. The need for trustworthy, real-time data has prompted several journalism and nonprofit groups to create repositories that pull together data from varying sources.
A Thursday morning session at Health Journalism 2022 in Austin, “The quest for COVID-19 data: Where ‘official sources’ fell short and journalism stepped in,” focused on these efforts and provided journalists with a wealth of resources for up-to-date data related to the pandemic.
Most high-income countries have national health care systems, so data collection and collation are far more straightforward than in the federalized U.S. health care system, where a mix of private and public payers are governed by national and differing state laws. Without a national registry or centralized health care system, it’s been harder to track statistics on COVID cases, hospitalizations, deaths, vaccinations, and other relevant numbers.
Hence the creation of Documenting COVID-19, a public-records repository with nearly 300 record sets and more than 100 investigative stories published with different partners since March 2020. The project team includes journalism fellows and Columbia University journalism and data science researchers funded through grants from MuckRock and Columbia’s Brown Institute for Media Innovation. The Documenting COVID-19 project pulls together internal emails, memoranda and health metrics from local and state governments, especially health departments, school districts and governor’s offices to create the repository.
Two of the most influential and esteemed medical journals — if not the top two — are the New England Journal of Medicine (NEJM) and the Journal of the American Medical Association (JAMA). JAMA is more widely circulated than any other medical journal in the world. NEJM has the highest impact factor (number used to measure the importance of a journal) of any medical journal (IF 74.7). So, the combined authorship of articles in these two journals is a reasonable yardstick for assessing the diversity of researchers represented in the most influential medical studies.
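For readers unfamiliar with how an impact factor is calculated, here is a minimal sketch. The citation and article counts below are purely illustrative, not NEJM's actual figures; only the ratio's definition is real.

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Impact factor for year Y: citations received in Y to articles
    published in the prior two years, divided by the number of citable
    items the journal published in those two years."""
    return citations / citable_items


# Illustrative numbers only, chosen to land near NEJM's reported IF of 74.7:
print(round(impact_factor(149_400, 2_000), 1))  # 74.7
```

In other words, an IF of 74.7 means the journal's recent articles were cited, on average, roughly 75 times apiece in a single year.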
That’s exactly the yardstick a new study published in the Journal of Racial and Ethnic Health Disparities applied. The findings are discouraging in light of all the lip service paid in the past decade to improving parity and diversity in medical research. Before I go into more detail about the study, here are a few key takeaways:
• Women and racial/ethnic minorities aren’t just heavily underrepresented — their representation as lead or senior authors isn’t increasing in any meaningful way in either NEJM or JAMA.
• At the current pace, it will take centuries for the proportion of Black and Hispanic lead and senior authors to match those groups’ share of the U.S. population.
• As journalists, we can’t change who study authors are but we can control who we contact for outside comments. We can and should also make a point to seek out women, gender minorities and Black and Hispanic researchers and clinicians.
• We can also pay attention to the authors of the studies we review. And when presented with two equally impactful studies to cover, we can opt for the one with more diversity among the authors.
Patricia Stinchfield, R.N., M.S., C.P.N.P., has just broken a glass ceiling, but it’s probably not the one you’re thinking of. As the president-elect of the National Foundation for Infectious Diseases (NFID), she’s not the first woman to lead the NFID. That would be Susan J. Rehm, M.D., from 2001-2004. But Stinchfield is the first nurse or nurse practitioner to lead the organization. Except for George C. Hill, Ph.D., from 2008-2010, every past president of the NFID has been an M.D.
Stinchfield’s barrier-breaking position signals a shift that has been occurring in health care and needs to happen in health journalism as well: Nurses are finally beginning to get the attention and respect they deserve for work that is very distinct from, but just as important as, that of physicians.
Journalists have long relied on doctors as sources for stories, whether it’s for general service health stories, investigative stories or outside opinions during coverage of medical studies. Now a new tip sheet provides resources on how to find nurses from a wide range of organizations who can provide various perspectives in your stories.
Nurses have been underrepresented in news coverage for years, as noted in a 2018 blog post by AHCJ member Barbara Glickstein, M.P.H., M.S., R.N., and Diana J. Mason, R.N., Ph.D., co-director of the Center for Health Policy and Media Engagement at George Washington University School of Nursing.
A new study provides a rare example of something akin to a head-to-head comparison of the quality of care delivered at hospitals run by the Department of Veterans Affairs (VA) versus those outside this federal system.
In this case, the advantage appears to go to the VA on a measure of patients’ likelihood of surviving the first 30 days after emergency care.
This study focuses on veterans aged 65 years or older who were enrolled in both the Veterans Health Administration and the Medicare program, reported David C. Chan, M.D., Ph.D., of Stanford University and co-authors in a paper published by the BMJ on Feb. 16. (This paper is available under open-access terms, making it freely accessible to the public.)
Chan and co-authors focused on cases of medical crises involving emergency ambulance rides with lights and sirens that originated from 911 dispatch calls. They used data from the VA, Medicare and Social Security Administration to track what happened to these veterans in the 30 days following these episodes. They also homed in on cases involving veterans who lived within 20 miles of at least one VA hospital and at least one other kind of hospital.
There were 9.32 deaths per 100 patients among veterans seen at VA hospitals (95% confidence interval: 9.15 to 9.50), Chan and co-authors wrote, versus an estimated 11.67 deaths per 100 patients (95% confidence interval: 11.58 to 11.76) among veterans taken to other hospitals. (For more on understanding confidence intervals, check the glossary in AHCJ’s medical studies section.)
These differences translate into an adjusted mortality rate after 30 days that was 20.1% lower among veterans taken to VA hospitals by ambulances than among veterans taken to other hospitals, Chan and co-authors wrote.
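As a back-of-the-envelope check, the relative difference between the two crude rates lands close to the authors' figure. Note that the study's 20.1% is an adjusted estimate, so this simple calculation only approximates, not reproduces, their method.

```python
# Crude 30-day mortality per 100 patients, as reported by Chan et al. (BMJ)
va_rate = 9.32
non_va_rate = 11.67

# Relative reduction of the VA rate versus the non-VA rate, in percent
relative_reduction = (non_va_rate - va_rate) / non_va_rate * 100
print(f"{relative_reduction:.1f}% lower at VA hospitals")  # 20.1% lower at VA hospitals
```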
Amid the madness of spring to summer 2020, it was impossible to keep up with the influx of publications about COVID-19. (It still is today, but that time was particularly exhausting in terms of knowing what to pay attention to.)
Two high-profile studies in that maelstrom — one on hydroxychloroquine and one exploring COVID-19 outcomes in patients taking ACE inhibitors for heart disease — were ultimately retracted because they used likely fraudulent data from Surgisphere, a company claiming to have a huge patient database available for researchers’ use.
Many of you may recall the Surgisphere scandal, which rocked the scientific world at the time and has since been thoroughly investigated. We’ve heard people claim ad nauseam that hydroxychloroquine can treat COVID, despite subsequent research showing that it doesn’t. The retraction of the other study, published in The New England Journal of Medicine (NEJM), was also widely publicized in the Surgisphere coverage.
A research letter published in JAMA Internal Medicine shows that citations of that retracted study have continued long after it was retracted, and more than a dozen studies have even used its data in secondary analyses. The findings carry a lesson for journalists covering medical studies that I’ve discussed before: make sure study citations are legit. While it’s not possible to check every citation in every study you write about, consider at least reviewing the citations used to make key points, justify an intervention or assumption, or provide data being reanalyzed. Make sure each of those citations is a peer-reviewed study (if that’s relevant to how it’s used) and hasn’t been retracted, or that any retraction is clearly noted.
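One lightweight screen for the step above: journals typically prepend “RETRACTED” to the titles of withdrawn articles, so scanning titles catches many flagged papers. This is a minimal sketch using made-up sample records; in practice you would pull each title from a bibliographic service such as the Crossref REST API (api.crossref.org/works/{DOI}) or consult the Retraction Watch Database, and a clean title is not proof a paper stands.

```python
def flag_possible_retractions(citations: dict) -> list:
    """Return DOIs whose titles carry a retraction marker.

    `citations` maps DOI -> article title, as collected from a
    bibliographic lookup. This is a crude first-pass screen only.
    """
    return [doi for doi, title in citations.items()
            if "retracted" in title.lower()]


# Hypothetical sample records for illustration (not real DOIs or titles):
sample = {
    "10.1000/example.1": "RETRACTED: Hydroxychloroquine and COVID-19 outcomes",
    "10.1000/example.2": "ACE inhibitors and 30-day mortality",
}
print(flag_possible_retractions(sample))  # ['10.1000/example.1']
```

A title scan like this would have flagged the Surgisphere papers once their publishers marked them, which is exactly the window in which later studies kept citing them.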