How much of a chance are you willing to take on a chance finding? That’s the question raised in a British scientific journal after a study published last year suggested that the breakfast cereal eaten by 740 pregnant moms somehow determined the gender of their babies.
As it turned out, 56 percent of the women who consumed the most calories before conception gave birth to boys, compared with 45 percent of those who consumed the least. Of 132 individual foods tracked, breakfast cereal was the most significantly linked with baby boys. Snap, crackle, pop, right?
Not so fast, points out Melinda Beck in a health column in The Wall Street Journal. She rightly notes that this was an observational study, and that the cereal findings are symptomatic of a serial habit of treating such studies as consistently reliable and insightful evidence of a trend.
“Behind the cereal squabble lies a deep divide between statisticians and epidemiologists about the nature of chance in observational studies,” Beck writes. “Statisticians say random associations are rampant in such studies, which is why so many have contradictory findings [ … and that] only strict clinical trials with a control group and a test group and one variable can truly prove a cause-and-effect association. [But] epidemiologists argue that … controlled clinical trials are costly, time-consuming and sometimes unethical.”
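The statisticians' point about rampant random associations comes down to simple arithmetic: test 132 foods at the conventional 0.05 significance threshold, and even if no food has any real effect you should expect roughly 132 × 0.05 ≈ 6 or 7 "significant" links by chance alone. A minimal simulation sketches the idea (the random food groupings and the two-proportion z-test here are simplified illustrative assumptions, not the study's actual method):

```python
import random

random.seed(0)

N_FOODS = 132   # foods tracked in the study
N_WOMEN = 740   # women in the study
P_BOY = 0.5     # assume no food truly affects the outcome

def z_stat(boys_a, n_a, boys_b, n_b):
    """Two-proportion z-statistic (normal approximation)."""
    p_pool = (boys_a + boys_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    return abs(boys_a / n_a - boys_b / n_b) / se

false_positives = 0
for _ in range(N_FOODS):
    # Split the women into "high intake" and "low intake" halves at
    # random, then compare the boy rate between the two groups.
    half = N_WOMEN // 2
    boys_high = sum(random.random() < P_BOY for _ in range(half))
    boys_low = sum(random.random() < P_BOY for _ in range(half))
    # |z| > 1.96 corresponds to p < 0.05, two-sided
    if z_stat(boys_high, half, boys_low, half) > 1.96:
        false_positives += 1

print(false_positives)  # typically a handful, from pure noise
```

Run it a few times with different seeds and a handful of foods will look "linked" to baby boys every time, even though the data are pure coin flips. That is the divide in a nutshell: without a controlled trial, the one food that floats to the top of 132 comparisons may be nothing more than the luckiest coin.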
At issue, of course, is the extent to which such studies should be believed and reported before repeated findings offer something more conclusive. What do you think? Should observational studies of this sort make headlines, or are they a cheap way to attract an audience?