When to quote survey results: How to judge quality and recognize red flags

Barbara Mantel


Courtney Kennedy (Photo courtesy of the Pew Research Center)

My editor at the online magazine CQ Researcher always asks if I can include survey results in my article. I feel comfortable doing that if I can find a survey from a highly regarded group that is in the business of analyzing public opinion, such as Pew Research Center or a national news organization. Other times, deciding whether to pay attention to a particular survey can be tricky.

I turned to Courtney Kennedy, vice president of survey research and innovation at the Pew Research Center, a “nonpartisan fact tank,” for advice for reporters on how to judge the quality of surveys. The conversation has been edited for length and clarity.

What is the difference between a survey and a poll?

At the end of the day, polls and surveys are the same thing. We’re talking about interviewing a sample of the population and then making inferences based on that.

What makes a good survey?

Ten years ago, that was an easy question to answer because there were certain established, codified ways of doing things that were proven to be better than other approaches. But with surveys moving largely online and with so much variation in how that is done, it’s no longer easy for me to rattle off, ‘Here are the three things for reporters to watch for.’

Previously, surveys were mostly done by phone?

That’s right, and you could say, ‘Only report a survey that drew a random sample of the population, where the sample source included all Americans.’ That could be a random sample of all the phone numbers in the United States, because about 98% of Americans have a phone number, or a random sample drawn from all residential addresses.

What can you tell reporters now that the majority of surveys are being conducted online?

I encourage people to look at transparency. Does the poll tell you how it was conducted? One red flag, and in my mind a disqualifier, is when all that is disclosed is that the poll was done online. That’s a signal that the pollster is really trying to avoid the conversation about how they drew the sample and how it was done. The better pollsters will tell you, ‘This survey was done online. And here is where the sample came from. And here is what I did to make that sample as nationally representative as possible, and here is what I did to weight the data.’

Does the sponsor, the entity paying for the survey, matter?

You are going to get better data in the long run by looking at polls done by a neutral, nonpartisan source, whether that’s a news outlet or a nonprofit like Kaiser Family Foundation, something like that. Really pay attention to who the sponsor is and whether they have a conflict of interest.

Should health reporters quote from surveys sponsored by a patient advocacy organization or an industry group?

It would be dangerous to write an entire story around a poll done by somebody with a conflict of interest. There is a difference between that and writing a story on a topic that you were going to write anyway, and you interview experts, bring other data and other facts to bear and maybe you mention that poll.

Should reporters take the time to read the underlying survey questions?

Yes. It should be clear that those were neutral questions, because if the questions were leading people toward answering a certain way, then that poll should be spiked. You shouldn’t report on it.

Now that surveys are conducted mostly online, are they still done with random sampling?

At Pew—and at other places like the Associated Press—we do surveys online, but we do them with rigorous panels that were actually recruited offline. A panel is a group of people who have agreed to take surveys on an ongoing basis. At Pew, we have a panel of about 10,000 people, and we interview them once or twice a month for years. We do it this way because it’s no longer practical to cold-call 1,000 Americans; people just don’t pick up the phone anymore.

Is this panel a random sample?

Yes. We draw a random national sample of home addresses, because the Postal Service makes available the master list of residential addresses. We mail people a letter, it’s got a few dollars in there, and we ask them to please go online. Here’s the web address. Here’s a passcode.

But not all online surveys use random sampling.

The other way is convenience sampling. If you go to Google and do a search or if you are on social media, sometimes you’ll see an ad to take surveys. Or you might get an email from, for instance, Walgreens, and it says, ‘You’re in our Customer Service Program, please take a survey.’ So there is a whole mishmash of convenience sampling that is in this other bucket of online surveys, and it’s a combination of ads and emails, and it’s haphazard sampling.

Why is a random sample better than a convenience sample?

In a random sample, everybody in the public had a chance of being selected for the survey, no matter their race, religion, income or age. And so the people who respond (it’s not going to be perfect) are going to be more representative. I’ll give you an example of the problem with online convenience sampling. There was a survey of young adults done with that methodology, and the young men looked very urban and white and educated. Eventually the survey client figured out that the reason was that all the young men were recruited from online gaming. And so when you’re not doing random sampling, you can get a lot of people taking the survey who have something in common that makes them not representative, like they’re all online gamers, and you can get weird biases.

You mentioned weighting the data. What is that?

All surveys need to be weighted. Weighting takes the people who actually completed your survey and makes that sample more representative of the country. As I explained, Pew recruits through the mail, and it turns out that women are more likely to open the mail than men, for whatever reason. So when we get our data in, it tends to be a little too female. If the Census Bureau tells us that 52 percent of U.S. adults are women, when we get our data back, it might be 56 percent female. But we want our data to look like the nation. So we weight down, we make the women in the sample have a little less influence on the final estimates. At a minimum, a poll of the general public should be weighted on core demographics like education, age, gender, and race and ethnicity.
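As a rough illustration of how that adjustment works, here is a simplified Python sketch using the 52 percent and 56 percent figures from the answer above. It weights on a single variable with made-up opinion numbers; it is not Pew’s actual procedure, which adjusts several demographics at once.

```python
# Simplified single-variable weighting sketch (illustrative only; real polls
# weight on several demographics at once, typically with raking).

population_share = {"women": 0.52, "men": 0.48}  # e.g., Census Bureau benchmarks
sample_share = {"women": 0.56, "men": 0.44}      # what actually came back

# Each group's weight is its population share divided by its share of the
# completed sample: women get weights below 1, men above 1.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print({g: round(w, 2) for g, w in weights.items()})  # {'women': 0.93, 'men': 1.09}

# Effect on an estimate: suppose 70% of the women and 50% of the men in the
# sample hold some opinion (hypothetical numbers).
support = {"women": 0.70, "men": 0.50}
unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(round(unweighted, 3), round(weighted, 3))  # 0.612 vs. 0.604
```

The weighted figure simply counts each woman in the overly female sample a little less, pulling the estimate toward what a sample matching the Census targets would have produced.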

Should reporters look at sample size when deciding whether to quote from a survey?

Most rigorous surveys have at least 1,000 interviews if it is a national sample. And so that is the ideal minimum. If I saw a survey done with only something like 300 people, that would be a huge red flag.

Is it okay for the sample size to be smaller if the pollster is trying to gauge the opinion of a subgroup, for instance cancer patients, rather than all American adults?

Yes, but you would still want a minimum of 100. If you interview a random sample of 100 people from any population, specialized or not, the margin of error is going to be plus or minus 10 points. For example, if your survey said 60 percent of people felt comfortable with x, it really could be as high as 70 percent or as low as 50 percent. That’s a huge range. In general, if the margin of error is more than 10 percentage points, it is not very accurate data.
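That plus-or-minus-10-points figure follows from the textbook 95 percent margin-of-error formula for a proportion. Here is a quick back-of-the-envelope check in Python, assuming a simple random sample; real polls with weighting tend to have somewhat larger margins.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) with 100 interviews: roughly plus or minus 10 points.
print(round(margin_of_error(0.5, 100) * 100, 1))    # 9.8
# The 1,000-interview national sample mentioned earlier: about 3 points.
print(round(margin_of_error(0.5, 1000) * 100, 1))   # 3.1
# A reported 60% from 100 interviews could plausibly sit anywhere from
# roughly 50% to 70%, the range described in the answer above.
print(round(margin_of_error(0.6, 100) * 100, 1))    # 9.6
```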

Should reporters mention the margin of error when reporting on a survey?

Yes. And ideally, the article will have a link to the full description of how the study was done.

Sometimes graphics accompany the survey. Can they be manipulated?

Reporters should absolutely be cautious when looking at graphics, because one way to make a small, non-meaningful difference look meaningful is to zoom in on the axis, especially the y-axis. The other thing that gets lost with graphics is the margin of error. We worry about this a lot with subgroups. For example, if we report that x percent of Black adults and y percent of Latino adults say something, you can do a chart where a three-point difference looks like a big deal. But that is almost always within the margin of error, and it’s not a statistically significant difference. So in addition to keeping an eye on whether somebody’s zooming in too much on small differences, you have to really bear in mind the margin of error.
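To make that concrete, here is a small sketch with hypothetical subgroup numbers (not from any actual poll) showing how a reporter could check whether a three-point gap clears the margin of error on the difference between two proportions.

```python
import math

def diff_margin_of_error(p1: float, n1: int, p2: float, n2: int, z: float = 1.96) -> float:
    """95% margin of error for the difference between two independent proportions."""
    return z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Hypothetical subgroup results: 48% vs. 51%, roughly 300 respondents each.
p1, n1 = 0.48, 300
p2, n2 = 0.51, 300

gap = abs(p2 - p1)
moe = diff_margin_of_error(p1, n1, p2, n2)
print(f"gap = {gap:.0%}, margin of error on the gap = {moe:.1%}")
# gap = 3%, margin of error on the gap = 8.0%
# The three-point gap sits well inside the margin of error, so a chart that
# makes it look dramatic is overselling a difference that isn't statistically significant.
```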

Are there resources for reporters to help them assess the quality of surveys?

The American Association for Public Opinion Research has some resources on its website. And SciLine [a free service for journalists from the American Association for the Advancement of Science] has done some education on this for journalists.

Barbara Mantel

Barbara Mantel is AHCJ’s health beat leader for freelancing. She’s an award-winning independent journalist who has worked in television, radio, print and digital news.