One bad stat can spoil the bunch – another cautionary tale


Photo: Dmitriy via Flickr

Recently I wrote about the need to check citations when covering a study that triggers mental alarm bells, such as one with a statistic that strains belief. That post focused on a letter in the New England Journal of Medicine that had frequently been cited as evidence that opioids aren’t very addictive.

A few weeks later, a similar issue undermined the credibility of dozens of publications (or more) on a far more divisive topic — gun violence.

A substantial proportion of the public already doubts much of the science related to firearms, such as the fact that a firearm’s presence in the home increases the risk of injury, homicide and suicide of a household member without increasing protection from home intruders. So when the media — and the study the media covered — incorrectly report a staggering statistic, it erodes confidence in firearm-related science even more. It also gives some firearm advocacy groups an excuse to dismiss the entire study and all the reporting on it. And that’s pretty much what happened.

A study in Pediatrics that reported on childhood firearm injuries in the United States attracted a lot of recent press. (Full disclosure: I reported on this study myself, a fact that will become more relevant shortly.)

As Dallas Morning News editor Mike Wilson wrote in a commentary about the mistake, the Washington Post, his paper and multiple other publications reported that “4.2 percent of American kids have witnessed a shooting in the past year.” A reader emailed Wilson and asked, “Really? Does it really sound believable that one kid out of every 24 has witnessed a shooting in the last year?”

As Wilson writes, “His instincts were right. The statistic was not.” The exact sentence in the study read, “Recent evidence from the National Survey of Children’s Exposure to Violence indicates that 4.2% of children aged 0 to 17 in the United States have witnessed a shooting in the past year.” It cited a 2015 JAMA Pediatrics study. Wilson asked reporter Michael Lindenberger to investigate the statistic.

Sure enough, the statistic was inaccurately represented — in BOTH studies. And one of the errors may have been tough even for seasoned health reporters to catch. The JAMA Pediatrics study’s Table 5 reported that 4.0 percent of all children ages 0 to 17 had “Exposure to shooting.” A reporter who checked that study should have immediately wondered why “4.0 percent” in the JAMA Pediatrics study became “4.2 percent” when it was cited in the Pediatrics study; that discrepancy alone should have raised a red flag.

But even then, a journalist who took Table 5 at face value and used “4.0 percent” in a story would still be reporting an incorrect statistic. That’s because the JAMA Pediatrics study didn’t measure exposure to shootings alone, something a reporter would discover only in the study’s supplementary content (which I doubt many read, especially on deadline, and especially while double-checking a single statistic for a different study).

When Lindenberger contacted the JAMA Pediatrics study’s lead author, he learned the survey question (W8 on page 5 of the supplementary content) was:

“At any time in (your child’s/your) life, (was your child/ were you) in any place in real life where (he/she/you) could see or hear people being shot, bombs going off, or street riots?”

Neither that question nor any other on the survey isolated exposures to shootings from “bombs going off” or “street riots.” Further, “exposure” was defined as “see or hear,” not “witnessed” as the Pediatrics study had reported. (The six items in Table 5 above “Exposure to shooting” did use the word “witnessed,” so it’s easy to see where the sloppiness came in.)

“So the question was about much more than just shootings,” Wilson wrote. “But you never would have known from looking at the table.” And the table is where most reporters would have stopped, even if they had been doing their due diligence in checking that statistic.

So the researchers of both studies contributed to the confusion and inaccuracy, but publications and public trust paid the price. Wilson correctly notes that “All of this matters because scientific studies — and the way journalists report on them — can affect public opinion and ultimately public policy.”

Worse, it ultimately led to even more misinformation. I learned about Wilson’s piece from a tweet by a Breitbart writer who had previously criticized my coverage of the story. In a Breitbart article about Wilson’s commentary, the author repeated, correctly, that the survey question “was not limited to shootings” but “included such occurrences as riots and bombings.” But then he followed with this: “Moreover, it not only covered what the children saw, but what the parents of the children saw as well.”

That’s incorrect. The survey question did include the language “your child’s/your,” “he/she/you,” etc. But if he had done his own due diligence and read the JAMA Pediatrics study’s methods, the Breitbart writer would have seen that the question exclusively referred to children’s exposures. The question used the language “your child’s/your” because the wording depended on the age of the child:

“If the selected child was 10 to 17 years old, the main telephone interview was conducted with the child. Otherwise, the interview was conducted with the caregiver who was most familiar with the child’s daily routine and experiences.”

The surveyors used “your” and “you” if speaking to an older child and “your child’s” if speaking to the caregiver regarding a younger child’s experience.

So the Breitbart article got it wrong too — and then went on to dismiss the entire study because of the incorrect citation. The author wrote that I, in covering the study, “bought the CDC-UT [Pediatrics study] claims hook, line, and sinker as well.” But I never reported the “4.2 percent” statistic; I only reported the new findings, which, as Wilson notes, are not in dispute at all. That means the Breitbart author is discounting the entire Pediatrics study on the basis of one incorrect citation.

That an article on Breitbart contains incorrect and hyperbolic statements is hardly surprising, but it does matter that the incorrect “4.2 percent” stat provides enough doubt for the outlet to question the entire study, which other news consumers may now do as well. Readers might wonder: if the Pediatrics researchers got that wrong, what else might they have gotten wrong? Breitbart’s initial critique of my coverage focused on how I covered the study (fair game), not on the fact that I covered it at all. Wilson’s excellent commentary on the inaccurate stat, however, shifted their focus to criticizing the fact that I (and presumably anyone else) reported the study’s findings at all. And that kind of doubt about real facts is bad for science, bad for journalism, bad for policy and bad for society.

Tara Haelle

Tara Haelle is AHCJ’s health beat leader on infectious disease and formerly led the medical studies health beat. She’s the author of “Vaccination Investigation” and “The Informed Parent.”