Using editorials, letters to help make sense of contradictory data

The Centers for Disease Control and Prevention released a bombshell alcohol recommendation to women on Feb. 2 that set off an explosion of responses. I was among those who weighed in on the fray, and I primarily addressed how the evidence itself about alcohol and pregnancy was obscured by the resulting backlash.

I also mentioned that I had previously interpreted the evidence differently over several years of covering periodic studies about light drinking and pregnancy. I didn’t go into a great deal of detail, however, on how I made that switch, and I thought that process might be instructive for other health journalists covering such controversial issues in which the science can be confusing. Writing about risk, in particular, can be incredibly thorny.

Given how fraught an issue drinking in pregnancy is, I found help where I wouldn’t ordinarily have looked had I not been diving into the deep end for book research: editorials, commentaries and letters to the editor in medical journals. Since I’d covered this research for past articles, I already had a sizable stack of recent studies on light drinking in pregnancy, and nearly all of them found no clinically significant effects from a couple of glasses of alcohol a week, sometimes even up to a glass a day, in the second and third trimesters. (Most studies agree that drinking of any kind in the first trimester is pretty risky.)

I used the reference sections of those studies, along with PubMed searches on their authors and the “related links” sidebar on PubMed articles, to identify other relevant studies.

This is where the unexpected aid arrived. The “related links” and author searches led me to a number of editorials, letters to the editor and similar commentaries about the studies I already had. In most cases, the letters were published a month or two after the original study, making it unlikely that I would have seen them when I first encountered or reported on the study.

Most of these were brief, but they were a gold mine of thoughtful criticisms and additional references I would never have thought of or found on my own. Every study, of course, includes a section on its strengths and limitations, but that section is written by the authors, who tend to play up the study’s strengths and play down its flaws. It’s often hard to know which of the listed “limitations” are padding included merely to satisfy the requirement and which are legitimate weaknesses that might affect how to interpret the study’s findings or how much weight to give them.

The editorials and, particularly, the letters I read offered more meaningful critiques about methodology, endpoint assessments, statistical choices, population and covariate selection, confounders, biases and other aspects of the studies that put them into better context. Each of these letters and editorials provided references I could then look up myself to evaluate their criticisms.

These references led me to basic science studies, critiques of developmental measures and assessment tools, fascinating case studies about twins and other scientific articles I wouldn’t have found on my own — or even known to look for. I also read letters and commentaries that supported the studies I originally had and offered persuasive reasons why it might be reasonable to conclude periodic light drinking was not harmful in pregnancy.

The combination of all this auxiliary reading provided the big-picture view of the forest I had been unable to see when inspecting only individual trees. I’ve written before about the importance of context in covering medical research, but most reporters think of context simply as familiarity with the existing evidence base of previous studies.

Expanding that to include letters, commentaries and editorials — admittedly when time allows — may give reporters a deeper, richer and more nuanced understanding of an especially complex issue.
