Winners of the 2016 AAAS Kavli Science Journalism Awards included science journalist Christie Aschwanden of FiveThirtyEight, who received the Silver Award in the online category for a three-part series that every health journalist would do well to read, reread and bookmark.
We previously praised how well she described p-hacking, study biases and other concepts essential to understanding research in the first story, “Science Isn’t Broken.” Her second piece, “You Can’t Trust What You Read About Nutrition,” was mentioned in a John Oliver segment that we also featured. It used the absurd link one study found between eating cabbage and having an innie belly button to illustrate the potential pitfalls of observational studies about nutrition.
The wording of questions in food questionnaire surveys used in nutrition research may not account for personal biases that participants unwittingly bring with them. “Although the questionnaire was meant simply to measure our food intake, at times it felt judgmental — did we take our milk full fat, low fat or fat free?” Aschwanden explains. “I noticed that when I was offered three choices of serving sizes, my inclination was to pick the middle one, regardless of what my actual portion might be.”
Questions about servings also become problematic if researchers do not take seasonal eating patterns into account. Aschwanden writes:
“Some questions — how often do you drink coffee? — were straightforward. Others confounded us. Take tomatoes. How often do I eat those in a six-month period? In September, when my garden is overflowing with them, I eat cherry tomatoes like a child devours candy. I might also eat two or three big purple Cherokees drizzled with balsamic and olive oil per day. But I can go November until July without eating a single fresh tomato. So how do I answer the question?”
She similarly describes the difficulty of tracking food intake in prospective studies, the challenge posed by too many variables and other limitations of nutrition research. “We expect far too much from them,” Aschwanden writes. “We want to answer questions like, what’s healthier, butter or margarine? Can eating blueberries keep my mind sharp? Will bacon give me colon cancer? But observational studies using memory-based measures of dietary intake are tools too crude to provide answers with this level of granularity.”
Aschwanden’s final piece in the series, “Failure Is Moving Science Forward,” explored the “reproducibility crisis” in science and why some real effects may not appear in studies that attempt to reproduce them. For health journalists in particular, this story is perhaps the most important of the three. Understanding replication and reproducibility is essential to providing context in stories about the latest study. In fact, her subsection “When studies conflict, which is right?” will be helpful to journalists frustrated with covering issues where the findings seem to flip back and forth with each successive study.
“The thing to keep in mind is that no single study provides definitive evidence,” she wrote. “The more that science can bake this idea into the way that findings are presented and discussed, the better.” The more frequently health journalists communicate this reality to readers where appropriate, the more informed and respectful of the scientific method those readers (hopefully) will become.