Watch out for these red flags in COVID-19 vaccine trials

Tara Haelle

About Tara Haelle

Tara Haelle (@TaraHaelle) is AHCJ's medical studies core topic leader, guiding journalists through the jargon-filled shorthand of science and research and enabling them to translate the evidence into accurate information.


Photo: Jernej Furman via Flickr

In a previous blog post, I discussed what reporters should look for when they dig into the data from the various COVID-19 vaccine clinical trials. That post covered the do’s, but it didn’t cover the red flags that reporters should watch for as well.

Vinay Prasad, M.D., a hematologist-oncologist and associate professor of medicine at the University of California, San Francisco, followed up his Twitter thread on what to look for with a list of common problems in vaccine clinical trials that journalists also should monitor:

  • Cherry-picking a secondary endpoint when the primary is null: If the pre-specified endpoint is null (no statistically significant effect), researchers may try to play up a secondary endpoint in their abstract and press releases.
  • Underpowered/low event rate used as an excuse for a null trial: If a trial doesn’t return positive results, the authors may blame an underpowered design or too few events (especially safety events) to calculate an effect size. But the trial should have been adequately powered to begin with, so ask one of your outside sources, ideally a biostatistician, whether it was. This is doubly true for safety signals: if a particular adverse event occurs a few times more often in the vaccine arm than in the placebo arm, but not often enough to calculate the odds, that may be worth investigating further.
  • Improper or dubious collection or presentation of harm data: Check with outside sources to make sure the safety analysis is as robust as it should be.
  • Multiple looks at the data without penalty: If researchers slice and dice the data enough times in enough ways, they can end up with at least one “statistically significant” result that is really a false positive, simply by virtue of having run so many tests. This becomes a form of p-hacking, intentional or unintentional, unless the researchers explicitly adjust for the multiple comparisons. A Bonferroni correction is one such adjustment, but it’s just one example and isn’t appropriate for all scenarios.
  • Botched long-term follow-up: It’s essential to gather long-term data on a vaccine’s safety and effectiveness, but that can only happen if researchers appropriately continue tracking participants from the original clinical trials, in addition to running any post-licensure studies. For example, the cohorts enrolled in the HPV vaccine clinical trials before that vaccine was approved continue to be followed to assess effectiveness.
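The multiple-looks problem above can be illustrated with a short calculation. This is a minimal sketch with hypothetical numbers, not data from any actual trial, showing why unadjusted repeated testing inflates false positives and how a Bonferroni correction compensates:

```python
# Illustrative only: all numbers here are hypothetical.
alpha = 0.05    # conventional significance threshold for a single test
n_tests = 20    # number of separate comparisons ("looks") at the data

# If each test uses alpha unadjusted, the chance of at least one
# spurious "significant" result across all 20 tests is about 64%.
family_wise_error = 1 - (1 - alpha) ** n_tests
print(f"Chance of >=1 false positive: {family_wise_error:.0%}")  # ~64%

# Bonferroni correction: require each individual p-value to clear
# alpha divided by the number of tests.
bonferroni_threshold = alpha / n_tests
print(f"Per-test threshold after correction: {bonferroni_threshold}")  # 0.0025

# A p-value that looked "significant" at 0.05 may not survive:
p_value = 0.03
print(p_value < bonferroni_threshold)  # False
```

In other words, a result reported at p = 0.03 clears the conventional 0.05 bar in isolation, but if it was one of 20 comparisons, it fails the corrected threshold, which is exactly the kind of detail an outside biostatistician can help a reporter check.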
