Should we beware the tyranny of the randomized controlled trial?

Tara Haelle

About Tara Haelle

Tara Haelle (@TaraHaelle) is AHCJ's medical studies core topic leader, guiding journalists through the jargon-filled shorthand of science and research and enabling them to translate the evidence into accurate information.

Photo: Herald Post via Flickr

The intersection of scientific research, evidence and expertise can be a dicey one, particularly in an age in which evidence-based medicine is replacing the clinical expertise of practitioners.

In The New York Times Sunday Review, Jamie Holmes wrote about how the challenge of assessing the quality of evidence against expertise and less stringently conducted research can lead readers to confusion and frustration. It can lead to a further distrust of science, Holmes suggested, noting the example of dental flossing in the wake of an Associated Press story that questioned the evidence in favor of the practice.

“In the case of flossing’s benefits, the supposedly weak evidence cited by the Associated Press was the absence of support in the form of definitive randomized controlled trials, the so-called gold standard for scientific research,” he writes. So little RCT evidence exists “because the kind of long-term randomized controlled trial needed to evaluate flossing properly is hardly, if ever, conducted — because such studies are hard to implement.”

Indeed, it is unlikely to be deemed ethical to randomly assign a group of people not to floss their teeth for long enough to assess the benefits of the practice, Holmes writes, because “it’s considered unethical to run randomized controlled trials without genuine uncertainty among experts regarding what works.”

In other words, if dentists and other experts are largely on the same page regarding the value and benefits of flossing, requiring RCTs to prove as much would resemble a milder version of the famous parody calling for randomized controlled trials of parachute use. The same rationale explains why it would be unethical to conduct a large-scale, long-term randomized controlled trial comparing vaccination according to the CDC childhood vaccine schedule with non-vaccination.

In the case of vaccines, though, individual RCTs for each vaccine, and for combinations of vaccines, do support the CDC schedule when taken together. The circumstances of flossing are more complicated, leading to confusion: “A lot of people now mistakenly think that ‘science’ doesn’t support flossing.”

Dentists know otherwise, Holmes said, based not only on clinical experience but also on other research that does not include RCTs. “Yet the notion has taken hold that such expertise is fatally subjective and that only randomized controlled trials provide real knowledge,” Holmes writes. This hyper-emphasis on the RCT, which sits at the top of the hierarchy of evidence while expert opinion is treated as close to worthless, has become overly simplistic, Holmes argues.

Any extreme is unhelpful and problematic, and extremes in evidence-based medicine are no different. Yet it is easy as a journalist to get caught up in strictly adhering to “rules” in evidence-based medicine, such as something not being “true” unless supported by adequate RCTs. In the process, journalists might become a bit like Javert in “Les Miserables,” so blinded by the “letter of the law” that they miss the substance and nuance in picking apart evidence, recommendations, clinical experience and other aspects of assessing the benefits and harms of various interventions, behaviors and practices.

Holmes paraphrases doctor Mark Tonelli in stating that “what a patient prefers on the basis of personal experience; what a doctor thinks on the basis of clinical experience; and what clinical research has discovered” are each valuable in their own way. Holmes describes clinical expertise and systematic evaluation as partners rather than rivals, especially considering that the former is more helpful in discovering and explaining things than RCTs are.

What I found most salient in Holmes’ piece was the implication that extreme reliance on RCTs to explain and defend — or debunk — pretty much anything and everything has contributed to an erosion of trust in experts and expertise. It fuels the kind of back-and-forth citation wars waged on social media by so-called victims of the Dunning-Kruger effect. The dangers of the “cult of randomized controlled trials” are seen in everything from anti-vaccination proponents distrusting doctors, to the broad distrust of science within large political groups, to the rejection of scientific realities by President-elect Donald Trump.

It also risks squandering valuable resources, Holmes suggests. “Antagonism toward expertise can also waste time and effort by spurring researchers to test the efficacy of things we already know work,” he said, calling for “a more nuanced view of expertise” to accompany the demand for evidence. This point is valuable for reporters to keep in mind when reporting on any medicine, health or science news.

One thought on “Should we beware the tyranny of the randomized controlled trial?”

  1. Merrill Goozner

    The piece in the Times and this piece are far removed from what really ought to be of concern to medical science reporters — the fact that there is limited outcomes evidence for a growing number of new drugs and devices being approved by the FDA under various accelerated approval programs. This problem will grow worse in the years ahead due to the sanctification of this approach in the 21st Century Cures Act.

    The hierarchy for evaluating evidence — from expert opinion on the low end to RCTs on the high end — is well described in the medical literature. That hierarchy is used by every group that evaluates evidence to come up with recommendations for clinical practice (the USPSTF, clinical practice guideline writing committees at specialty societies, the Cochrane Group committees, etc.).

    We have a far bigger problem in the U.S. with doctors who do not practice in accordance with guidelines whose development was based on excellent clinical evidence, but instead rely on their personal expert opinion.
