How to think about impact factors: A journalist’s perspective

This is the second of two blog posts offering different perspectives on the value of a journal’s impact factor when deciding whether to report on a study published in it. Today I feature an interview with AHCJ Vice President Ivan Oransky, M.D., co-founder of Retraction Watch and founder of Embargo Watch. He is also vice president and global editorial director at MedPage Today.

Oransky brings a journalist’s perspective to the role journal impact factors play in evaluating the value of a study. My earlier post featured an interview with Hilda Bastian, Ph.D., who has a background in research.


What are the limitations or problematic aspects of using impact factors to judge the quality of a journal or to help assess whether you should cover a study published in it?
Researchers have been warning — appropriately — about the risks of overusing impact factors for as long as there have been impact factors. They can be gamed, like any other metric. Even Eugene Garfield, one of the people who invented impact factors, has written that “use of journal impacts in evaluating individuals has its inherent dangers.” And it’s been shown that high-impact-factor journals tend to have more retractions, although the evidence suggests most, if not all, of that is due to the greater scrutiny those journals receive.

In what ways can an impact factor be a helpful metric for journalists and editors?
I think of impact factors, when used in a limited way, as one among many criteria by which to judge a given paper. I’d stress that it should never be used alone, and that a roster of sources to bounce studies off is much better in the long term than an impact factor assessment. But if you’re trying to decide whether to cover a study that someone has just sent you a press release about, it’s not unreasonable to check how the journal’s impact factor ranks among other journals in its field.
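For context on what the number actually measures: the standard two-year impact factor is the count of citations a journal received in a given year to items it published in the previous two years, divided by the number of citable items it published in those two years. Here is a minimal sketch of that arithmetic, with all numbers invented purely for illustration:

```python
# Two-year journal impact factor, conceptually:
# citations received in year Y to items published in years Y-1 and Y-2,
# divided by the number of citable items published in those two years.
# All figures below are hypothetical.

citations_to_prior_two_years = 1200   # e.g., cites in 2023 to 2021-2022 papers
citable_items_prior_two_years = 400   # e.g., articles and reviews from 2021-2022

impact_factor = citations_to_prior_two_years / citable_items_prior_two_years
print(f"Impact factor: {impact_factor:.1f}")  # Impact factor: 3.0
```

One caveat worth keeping in mind: it’s an average over a skewed distribution, so a handful of heavily cited papers can pull the number up for every other paper in the journal.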

Note that I said within its field. It’s useless (and misleading) to compare the impact factor of a journal in one field with the impact factor of a journal in another field. And this kind of comparison is probably more useful among clinical specialties than among basic science journals. But scientists definitely compete to get into higher-impact-factor journals in their specialty — a practice many of us wish would give way to better incentives than “publish or perish,” but it hasn’t yet. Sometimes that might prompt researchers to cut corners, just as it might nudge a reporter to make something up to get a story into a big magazine. But most of the time, it means the work has been beaten up a bit before it’s published.
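To make the within-field point concrete, here is a minimal sketch of ranking a journal against others in its own field. The journal names and numbers are entirely hypothetical:

```python
# Where does a journal's impact factor fall within its own field?
# Names and numbers are invented for illustration only.

field_impact_factors = {
    "Journal A": 12.4,
    "Journal B": 6.1,
    "Journal C": 4.8,
    "Journal D": 3.0,
    "Journal E": 1.9,
}

def field_rank(journal: str, field: dict) -> str:
    # Sort journal names by impact factor, highest first
    ranked = sorted(field, key=field.get, reverse=True)
    position = ranked.index(journal) + 1
    return f"{journal}: #{position} of {len(ranked)} in its field"

print(field_rank("Journal C", field_impact_factors))
# Journal C: #3 of 5 in its field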

So, if something sounds interesting from a press release, and it has appeared in a journal with a reasonable impact factor, you can have a bit of confidence in it. But that’s not an excuse not to read the paper. If you read the paper and it says nothing like what the press release claimed it did, or it’s flawed in some other way — too small, improper design, etc. — drop it. A high impact factor doesn’t mean you should definitely cover the work.

As Garfield wrote of those judging research, “In an ideal world, evaluators would read each article and make personal judgments.” And that’s true of journalists, too. But whether you have time to read every single paper, instead of eliminating some before you invest the time in reading them, is another question.

Based on what I know about how newsrooms work, I’m concerned about how reporters can sift through the endless stream of press releases they get, in a way that’s based on something other than the word “breakthrough.” I’d welcome other solutions, but if we don’t think about narrowing the funnel, we’re ignoring an important part of reality.

What considerations go into deciding which studies to cover, and from which journals, that journalists and editors are weighing but the research community may not realize?
Researchers tend to know which journals are relevant in their fields without needing to know their impact factors. It’s the sort of thing you learn during your training, and that is reinforced within the communities of a given field. But they less often know which journals are relevant in adjoining or even distant fields, even if they read them regularly to keep up with other research.

That’s the situation a general health or science reporter, or someone new to the beat, will frequently find himself or herself in. A look at how a journal ranks within a field, used among other criteria, can offer a helping hand. But developing sources who can help you navigate the literature is even better.
