Alcohol study coverage lacked necessary journalistic scrutiny, context

When JAMA Psychiatry published a study about alcohol use disorder prevalence a few weeks ago, the findings predictably led to a flood of stories about a seemingly fast-growing alcoholism public health crisis in the United States.

The study claimed a nearly 50 percent increase in the prevalence of alcohol use disorder compared with a decade earlier, a staggering jump by any measure.

The headlines — “A new study says heavy drinking has become a ‘public health crisis’,” “Minorities, the elderly and women are drinking much more alcohol,” “Study finds 1 in 8 Americans struggles with alcohol abuse,” “U.S. sees dramatic rise in alcohol use, high-risk drinking,” “American women, seniors drinking more than ever,” and “Study finds increase in alcohol use disorders among Americans” — were not sensational or hyperbolic. They were measured, and by and large they reflected exactly what the study authors concluded. (Though some headlines, such as “One in eight American adults is an alcoholic, study says,” used stigmatizing, non-person-first language with the term “alcoholic” – but that’s a discussion for another post.) I covered the study myself for Clinical Psychiatry News.

The problem is, the study itself appears deeply flawed. Vox first called attention to the study’s flaws, and I covered them as well, again at Clinical Psychiatry News. But those stories came a week after the initial wave of coverage, and only a handful of the outlets that initially covered the study have since revisited its shortcomings.

The study’s methodological limitations were not conspicuous enough to jump out at green journalists (or even most veteran health reporters, including me) in the time reporters typically allocate for a daily single-study story. (I strongly encourage reading about them at Vox and in my article at Clinical Psychiatry News.)

But any journalist willing to put their skepticism front and center could have uncovered them with even a little digging before publication. (Yup, including me!) The study was embargoed for nearly a week, and ostensibly the purpose of embargoes is to give reporters time to subject such studies to the scrutiny they should receive.

In practical terms, given a typical journalist’s workload and schedule, that level of scrutiny won’t – and often can’t – always happen. However, when a study concludes with findings as eyebrow-raising as this one did, journalists have a responsibility to stop for a moment and ask, “Wait — there was HOW MUCH of an increase? Really? And how did they determine that? Hmm… I need to look at those data sets again.” That’s all it would take to go back to the study, notice that its authors compared data sets collected 10 years apart – and hopefully wonder, “Did anything about that survey change during that time? Is that survey the most authoritative and appropriate source for estimating alcoholism prevalence?”

Those questions should have led journalists to discover substantial differences between the two surveys being compared. Those differences, in fact, were very likely to introduce bias into the results, particularly through unequal levels of social desirability bias. Reporters then could (and should) have discussed those limitations in their coverage. Even better, they might have looked at additional data sets, as the follow-up Vox article did, to see what other federal data showed.

Even if reporters didn’t stop to wonder about the plausibility of those dramatic increases in the study findings – or learn about the changes in the survey on their own – they should have explicitly asked outside experts about the study’s limitations (a question reporters should always ask). It’s not that the U.S. does not have an alcohol use problem — it does — but reporters dropped the ball in providing sufficient context about the scope of the problem.

One thought on “Alcohol study coverage lacked necessary journalistic scrutiny, context”

  1. Elaine Schattner

    I agree this was a poorly done study. But why harp on this example? (Because you “know” or think the numbers regarding alcoholism are untrue?) I often wonder why the writers at Vox (and elsewhere) don’t scrutinize – or complain about the lack of scrutiny of – the many negative “studies” on breast cancer screening that get loads of attention and are so methodologically flawed it’s hard to see how they got past peer review.
