Tip sheet: What to look for in covering surveys and polls

As we head into campaign season, we’re going to start seeing more and more articles about candidates and issues. While health journalists may not typically report much on public polling of candidates, we do frequently report on the public’s attitudes and beliefs on health-related issues. 

That’s especially likely with the 2024 elections, given how many states have banned or are considering banning abortion and gender-affirming care.

On top of that, Robert F. Kennedy Jr., a notorious and highly influential anti-vaccine crusader, is running for president. Vaccine hesitancy has been increasing, largely because anti-vaccine advocates like Kennedy have exploited the zeitgeist to stoke distrust of COVID-19 vaccines, so polls about vaccines are likely to be a feature of the election season.

But with so many polls and surveys out there, how do you tell whether one is reliable enough to report on? A recent article in PNAS Nexus calling for “protecting the integrity of survey research” offers recommendations aimed primarily at survey researchers, but they’re also useful for journalists weighing the quality of a survey.

Key points

  • Reliable surveys are transparent about their methods and practices. 
  • A clear explanation of the sample size should be available, along with how the sample was weighted if “representative sampling” was used. 
  • The survey should disclose its potential limitations, just as you would expect in a peer-reviewed study.

Just as medical studies always contain some uncertainty, polls and surveys are not definitive reflections of public opinion. However, as past research on the public’s perception of uncertainty has shown, the way uncertainty is framed in an article can affect whether the public trusts both the research and the reporter.

Unfortunately, there’s not enough research to offer do’s and don’ts for how to frame uncertainty in news pieces to ensure an audience’s trust, and the research that has been done suggests it depends at least partly on people’s preexisting beliefs.

But we do know that being straightforward and transparent about data sources is important to audiences, and journalists should do the same when researching polls and surveys. 

“Questions about the trustworthiness of survey research can arise when surveys produce contradictory results or ones belied by other data, such as the certified vote count in an election,” the authors of the PNAS Nexus paper wrote. The authors discuss some reasons those contradictory results might occur. Regardless of the reason, the remedy is the same: transparency about how the survey was conducted.

Hence, a big chunk of the paper’s recommendations focuses on the need for survey researchers to provide as much information as possible about how a survey was conducted. The more information survey researchers provide about how they conducted their study, the more reliable you can expect their findings to be, and the more confidently you or an outside expert can assess the data’s quality.

What to look for in assessing a survey’s methods:

  1. What is the intended population being studied? 
  2. How were respondents recruited? 
  3. What tools were used to gather the data (in-person interviews, online questionnaires, landline or cell phone calls, etc.), and how might those tools influence the responses or who gets reached? For example, a landline survey is unlikely to capture enough young adults to be representative of that population.
  4. How were respondents sampled and weighted (if it’s considered a “representative” sample)? 
  5. What’s the margin of error? (For a back-of-the-envelope look at how weighting, the margin of error and the response rate are calculated, see the sketch after this list.)
  6. Who funded the study? 
  7. Who conducted the study? And were they independent of the funder? 
  8. What were the dates of data collection, and over how long a period of time? 
  9. What steps were taken to ensure data quality? 
  10. Is the exact wording and order of the questions disclosed? How questions are worded and ordered can affect people’s responses. 
  11. What was the response rate? 
  12. How might the response rate affect the quality and reliability of the data? And, if the survey involves repeated waves of questions over a long period of time, how might the attrition/dropout rate affect this?
  13. Do the survey researchers acknowledge the limitations of the data? 
  14. Do you have access to the dataset? Not having access isn’t a dealbreaker — it’s pretty uncommon — but it’s nice when you can see the raw data.
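
For those who want to see the arithmetic behind some of these items, here’s a minimal Python sketch of the textbook margin of error for a proportion, a toy one-variable post-stratification weighting, and the simplest version of a response rate. All the numbers are hypothetical, and real pollsters use more elaborate methods (such as raking across several demographics at once), so treat this as an illustration rather than any pollster’s actual procedure.

```python
import math

# --- Margin of error for a single proportion (textbook version) ---
# Assumes simple random sampling; weighting typically inflates the
# real-world margin of error (the "design effect").
def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 1,000 respondents, 52% hold some view.
moe = margin_of_error(0.52, 1000)
print(f"52% +/- {moe * 100:.1f} points")  # roughly +/- 3.1 points

# --- Post-stratification weighting (toy, one variable) ---
# Each respondent's weight is their group's share of the population
# divided by the group's share of the sample, so under-sampled groups
# count for more. All shares below are made up for illustration.
population_share = {"18-29": 0.20, "30-64": 0.55, "65+": 0.25}
sample_counts = {"18-29": 80, "30-64": 600, "65+": 320}  # n = 1,000

n = sum(sample_counts.values())
weights = {
    group: population_share[group] / (count / n)
    for group, count in sample_counts.items()
}
print(weights)  # 18-29 weighted up (2.5); 65+ weighted down (~0.78)

# --- Response rate (simplest definition) ---
completed, invited = 550, 5000
print(f"Response rate: {completed / invited:.1%}")  # 11.0%
```

One practical consequence: when a subgroup such as young adults is only a sliver of the sample, its members carry large weights, and the effective margin of error for findings about that subgroup is considerably larger than the headline figure.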

Finally, in reporting the findings, resist the temptation to throw in everything but the kitchen sink. Decide which are the most important or revealing numbers and focus on those in your lead or nut graf. 

You can mention some of the other relevant or interesting findings lower in the story, but keep in mind: The more numbers you report, the less likely your audience is to remember any of them, or the key points of your story.

Tara Haelle

Tara Haelle is AHCJ’s health beat leader on infectious disease and formerly led the medical studies health beat. She’s the author of “Vaccination Investigation” and “The Informed Parent.”