Finding the latest COVID-19 studies — and covering them thoughtfully

Tara Haelle

About Tara Haelle

Tara Haelle (@TaraHaelle) is AHCJ's medical studies core topic leader, guiding journalists through the jargon-filled shorthand of science and research and enabling them to translate the evidence into accurate information.

In the early days of the coronavirus pandemic, most data came from news reports, clinical summaries and preprints. Now more and more peer-reviewed studies are coming out each day, and it’s challenging to keep up with them. Several journals have set up dedicated coronavirus sites that make it easier to track the research.

The Lancet’s COVID-19 Resource Centre, JAMA Network’s COVID-19 resource center and NEJM’s Coronavirus (COVID-19) page all include the newest studies, commentary and related data and information on the pandemic. Oxford University Press, which publishes The Journal of Infectious Diseases, has a similar site. The CDC and NIH mailing lists will ensure you receive studies from CDC journals, and setting up a PubMed alert can catch a lot of other studies not published in the aforementioned journals.

But in trying to keep up with the latest research about the novel coronavirus, be conscientious. Given the public’s desperation to get as much information as possible, it’s tempting to report the results of most studies as quickly as possible. In other words, right now is the easiest time to make a mistake by reporting on a poorly vetted study or by not adequately conveying a study’s context and nuance. I did it myself today.

I frequently cover pediatrics, so I’m used to reading studies from the journal Pediatrics. When a new study came out on COVID-19 infections in more than 2,000 Chinese children, I quickly read it and reported on it. But in my haste to report it soon after publication — there was no embargo since it was on COVID-19 — I missed a key detail: the different definitions of “severe” and “critical” cases, which were reported in the study as a combined percentage for each age group.

Even though the authors combined them, they’re very different. Severe cases primarily involved pneumonia with difficulty breathing and low blood oxygen saturation, which is generally survivable in the US. Critical cases were those involving acute respiratory distress or failure and/or multiple-organ failure. Because the study combined them into a single statistic, my brain collapsed them together too.

Fortunately, Jessica M. Rivera, a microbiologist and health/science writer, told me on Twitter that a pediatrician at UCSF “pointed out that their ‘severe’ classification isn’t really considered severe to us if it’s truly just needing O2. ‘Severe’ here would be needing high flow O2 or BiPAP or being intubated. So more alarming than hopefully reality.” Rivera’s comment wasn’t a criticism of my article, but it was an important point I’d missed, and it led me to go back and review the paper, find the definitions, and then adjust my article so that I actually distinguished between “severe” and “critical” percentages throughout. It definitely brought down some inadvertent alarmism in my coverage.

(If I had sought outside comment, as I usually do, I likely would have avoided this misstep. Some of my articles at certain publications do not always use outside comment for various reasons, and this story reveals the risk of that.)

The study also included a limitation that I’d noted — the inclusion of both lab-confirmed and suspected cases — but had not given enough thought to until a different colleague, medical writer Kari Oakes, asked me on Twitter: “What do you think of the critique (noted in your write-up and elsewhere) that the increased severity among younger children might really have been more RSV, possibly influenza, since 2/3 of cases were not test-confirmed? I saw elsewhere a call for analysis of test-+ only cases.”

I acknowledged the concern and noted that flu vaccine effectiveness studies often use only lab-confirmed flu for that reason. But, she noted, if a study only includes lab-confirmed cases, more asymptomatic cases could be missed. Her comment led me to add a bit more explanation about the limitation in the article, and I thanked both for their input.

My original article was not incorrect, but it did not contain the nuance and context it could have and which would have helped readers better understand its implications. It’s a crazy time to be a health reporter, but that means taking the time to report on research thoughtfully is all the more important.
