Hundreds of peer-reviewed papers have cited a retracted study on COVID-19

Photo by CDC via Pexels.

Amid the madness of spring and summer 2020, it was impossible to keep up with the influx of publications about COVID-19. (It still is today, but that period was particularly exhausting when it came to knowing what to pay attention to.)

Two high-profile studies in that maelstrom — one on hydroxychloroquine and one exploring COVID-19 outcomes in patients taking ACE inhibitors for heart disease — were ultimately retracted because they used likely fraudulent data from Surgisphere, a company claiming to have a huge patient database available for researchers’ use.

Many of you may recall the Surgisphere scandal, which rocked the scientific world at the time and has since been thoroughly investigated. We've heard people claim ad nauseam that hydroxychloroquine can treat COVID, despite subsequent research showing that it doesn't. The retraction of the other study, published in The New England Journal of Medicine (NEJM), was also widely publicized in the Surgisphere coverage.

A research letter published in JAMA Internal Medicine shows that citations of that retracted study continued long after it was retracted, and more than a dozen studies have even used its data in secondary analyses. The findings carry a lesson for journalists covering medical studies that I've discussed before: make sure study citations are legit. While it's not possible to check every citation in every study you write about, consider at least reviewing the citations used to make key points, justify an intervention or assumption, or provide data being re-analyzed. Make sure none of those citations has been retracted (or, if it has, that the retraction is noted) and that each is a peer-reviewed study (if that's relevant to how it's used).

For the research letter, researchers searched Google Scholar on two dates (March 31 and April 2, 2021) for papers that cited the retracted NEJM study, then confirmed the citations by reviewing those articles' full texts. In addition to noting each article's publication date, the researchers looked at whether it acknowledged the retraction and, for studies, whether it used the retracted study's data in a secondary analysis. Finally, they counted how many times each of those articles (the ones citing the retracted study) had themselves been cited (secondary citations).

Out of 934 articles citing the retracted study, 16% were from preprints or non-peer-reviewed journals and were excluded from this analysis. A total of 652 articles (all listed here) were verified as peer-reviewed publications citing the retracted study. Although 11% were published before the NEJM article was retracted and 35% were published within two months of the retraction, the majority (54%) were published at least three months after the NEJM article had been retracted. More than a quarter (28%) were published at least six months later. Even nearly a year later, in May 2021, there remained 21 citations of the retracted study.

Only 18% of the citations in these 652 articles noted that the study had been retracted. Most often, authors used the citation to support a statement in their article. However, 17 articles used the retracted study's data in a new analysis, and only two of these noted the retraction. Further, most of those articles (11 of the 17) were published at least three months after the retraction, and seven of them at least six months later. As of May 2021, those 17 articles had been cited a median of 19 times each in other articles.

Seventeen studies used data from a retracted study in their research, most of them well after the retraction occurred and without noting it. Each of those articles has in turn been cited anywhere from two to 49 times in other research articles. Presumably, those papers will eventually be cited in still other research, and… you get the idea. There's a serious ripple effect here.

Why does this matter to journalists covering medical studies? I'll bring your attention to a blog post I wrote back in 2017 about the discovery that "a five-sentence letter published in the NEJM in 1980 was heavily and uncritically cited as evidence that addiction was rare with long-term opioid therapy." That very anecdotal NEJM letter to the editor on the unlikelihood of addiction with opioid therapy had been cited 608 times from 1980 through 2017, and the citations doubled in the same year (1996) that OxyContin entered the market.

I encourage you to re-read that 2017 blog post because it remains startling, and depressing, to me today that so many research papers cited a brief, anecdotal letter to the editor that was at least two decades old as evidence that opioid treatment rarely leads to addiction. Now we see that hundreds of articles are citing a retracted COVID-19 study with no mention of the retraction, despite extensive media coverage of one of the biggest research scandals of the pandemic. And the retraction was due to fraudulent data, which 17 other studies then used in additional analyses.

How many less-publicized retracted studies are being regularly cited or used in data analyses, and how many of those are we journalists covering? Hopefully, if we're doing our due diligence and getting perspectives from outside experts on the research we cover, an errant citation in a study won't matter much. But other times, it could be the source of a misbelief that causes serious harm.

Tara Haelle

Tara Haelle is AHCJ’s health beat leader on infectious disease and formerly led the medical studies health beat. She’s the author of “Vaccination Investigation” and “The Informed Parent.”
