Tag Archives: observational

Bias or comorbidity? Risk factors for respiratory disease aren’t always what they seem

By this point, anyone who’s been covering or following COVID-19 knows that several comorbidities substantially increase the risk of complications and severe disease. Among those mentioned most often are diabetes, heart disease and obesity.

We learned of the associations between those conditions and more severe disease first from clinical anecdotes, then case series, then observational studies. But observational studies can almost never show causation. (I don’t think they can ever, on their own, show causation, but I add the “almost” because nothing in science is ever absolute.) So although diabetes is linked to poorer outcomes with COVID-19, that doesn’t mean having diabetes causes poorer outcomes. Continue reading
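One way to see why a link in observational data need not mean causation is a toy simulation (purely illustrative, not drawn from any of the studies discussed here): a hidden confounder drives both the “exposure” and the “outcome,” so the two correlate even though neither causes the other.

```python
import random

random.seed(42)
n = 10_000

# A hidden confounder (think of something like age) influences both
# the "exposure" and the "outcome". The exposure has NO direct effect
# on the outcome, yet the two will still be correlated.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]  # exposure = confounder + noise
y = [zi + random.gauss(0, 1) for zi in z]  # outcome  = confounder + noise

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = pearson(x, y)
print(f"correlation between exposure and outcome: {r:.2f}")
```

By construction the true causal effect of the exposure on the outcome is zero, yet the observed correlation comes out close to 0.5. An observational study of these data would find a strong, highly “significant” association that is entirely explained by the confounder.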

Covering a controversial study: How to dig deep on a deadline

From the moment I saw the study — and editorial and editor’s note — among JAMA’s embargoed studies, I knew it would be a doozy. Certain topics arouse controversy simply by their existence, and water fluoridation is very high on that list.

So when I was assigned to write about the JAMA Pediatrics study finding a link between prenatal fluoride exposure and reduced IQ in preschoolers (Reminder: AHCJ members get free access to the JAMA Network), two things went through my mind: One, this is going to be covered horribly by some outlets and likely create unnecessary anxiety among parents, especially pregnant women (who have enough to worry about when it comes to do’s and don’ts). Two, I need to be one of those who gets it right. Continue reading

Award-winning series can help you better understand medical studies

Photo: Dale Gillard via Flickr

Winners of the 2016 AAAS Kavli Science Journalism Awards included science journalist Christie Aschwanden of FiveThirtyEight, who received the Silver Award in the online category for a three-part series that every health journalist would do well to read, reread and bookmark.

We previously praised how well she described p-hacking, study biases and other important concepts in understanding research in the first story, “Science Isn’t Broken.” Her second piece, “You Can’t Trust What You Read About Nutrition,” was mentioned in a John Oliver segment that we also featured. It used the absurd link one study found between eating cabbage and having an innie belly button to illustrate potential problems in observational studies about nutrition. Continue reading

Analysis shows pitfalls of observational studies

How much of a chance are you willing to take on a chance finding? That’s the question raised in a British scientific journal after a study published last year suggested that the breakfast cereal eaten by 740 pregnant moms somehow determined the gender of their babies.

As it turned out, 56 percent of the women who consumed the most calories before conception gave birth to boys, compared with 45 percent of those who consumed the least. Of 132 individual foods tracked, breakfast cereal was the most significantly linked with baby boys. Snap, crackle, pop, right?
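Not necessarily. When you test 132 foods against a coin-flip outcome, some “significant” links are expected by chance alone. Here’s a toy simulation (an illustration of the multiple-comparisons problem, not a reanalysis of the actual study): 740 simulated mothers, baby sex assigned at random, 132 made-up foods eaten at random, and a standard two-proportion z-test on each food.

```python
import math
import random

random.seed(0)
n_women, n_foods = 740, 132

# Baby sex is a pure coin flip, independent of everything the women eat.
boy = [random.random() < 0.5 for _ in range(n_women)]

def two_prop_p(k1, n1, k2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

false_positives = 0
for _ in range(n_foods):
    # Each "food" is eaten at random, with no relation to baby sex.
    eats = [random.random() < 0.5 for _ in range(n_women)]
    k1 = sum(b for b, e in zip(boy, eats) if e)      # boys among eaters
    n1 = sum(eats)
    k2 = sum(b for b, e in zip(boy, eats) if not e)  # boys among non-eaters
    n2 = n_women - n1
    if two_prop_p(k1, n1, k2, n2) < 0.05:
        false_positives += 1

print(f"'significant' foods out of {n_foods}: {false_positives}")
```

At a 0.05 threshold you expect roughly 132 × 0.05 ≈ 6 or 7 spurious hits, and whichever random food happens to score best looks like a headline-ready finding, even though nothing here has any real effect.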

Not so fast, points out Melinda Beck in a health column in The Wall Street Journal. She rightly notes this was an observational study, and that the cereal finding is symptomatic of a broader habit of treating such studies as if they offered consistently reliable, insightful evidence of a trend.

“Behind the cereal squabble lies a deep divide between statisticians and epidemiologists about the nature of chance in observational studies,” Beck writes. “Statisticians say random associations are rampant in such studies, which is why so many have contradictory findings [ … and that] only strict clinical trials with a control group and a test group and one variable can truly prove a cause-and-effect association. [But] epidemiologists argue that … controlled clinical trials are costly, time-consuming and sometimes unethical.”

At issue, of course, is the extent to which such studies should be believed and reported before repeated findings offer something more conclusive. What do you think, though? Should observational studies of this sort make headlines or are they a cheap way to attract an audience?