One of the most important skills required of reporters who cover medical research is the ability to find and discuss the limits of the studies we cover.
To that end, a trio of professors at Cambridge University recently published a helpful comment in the journal Nature: “Twenty Tips for Interpreting Scientific Claims.” (If you don’t subscribe, you can read the full article for free here.)
Some of my favorites (in no particular order):
- Study relevance limits generalizations – a great reminder that the conditions of any study will limit how its findings can be applied in the real world.
- Bias is rife – We talk about several types of bias in the topic section, like reporting bias and the healthy user effect. The article reminds us that even the color of a tablet can shade how study participants feel.
Image by themozhi’s pixel displays via flickr.
It’s a jaw-dropper of a story. A reluctant television reporter is persuaded by her producers to have a mammogram in front of the cameras. A few weeks later, she reveals the results on air: The test she initially didn’t want found cancer.
In an essay for her employer, ABC News, Amy Robach wrote:
The doctors told me bluntly: “That mammogram just saved your life.”
If you’re a woman, this is the kind of news that sends a cold stab of fear through you. Here’s a professional in the prime of her life with no family history and, by her own estimation, very little in the way of personal risk. And she’s young — just 40 years old.
The problem with Robach’s story is that it is too scary. It seems to be a play for ratings in November, a month when television stations rely on viewership numbers to set advertising rates.
Medical research can often seem far removed from a local health beat. All the statistics, the jargon and the complicated graphs can make it easy to forget that behind every number there’s a real person. In fact, medical studies can be great jumping-off points for local stories. The key is finding the people who are at the heart of the research.
We asked health reporter Eryn Brown to share how she recently turned a medical study from Yale University into a poignant local story for the Los Angeles Times. In bringing the research home, she shined a light on the heartbreaking ways low-income mothers have to stretch diapers when they can’t afford a steady supply.
The story is part of a recent push in research to “operationalize” poverty by documenting the concrete ways income impacts health and quality of life. These kinds of studies are starting to give us a glimpse into the hardships faced by people on the fringes of society and offer reporters some meaningful stories to tell.
Read about how Brown came across the story and how she reported it.
Image by Eric Allix Rogers via flickr.
So you have a great medical study to cover – interesting topic, compelling results. All you need is an interview with the study’s authors to help bring the research home to readers.
That’s where things get tricky. The researcher you need to connect with before your oh-so-tight deadline has letters in his or her affiliation that don’t bode well for timely interviews: FDA, HHS, USDA, CMS.
Scoring an interview with a scientist who works for a government agency can be frustrating and full of dead ends. It shouldn’t be. AHCJ’s Right to Know Committee is working on improving reporters’ access to a number of government agencies.
But change is slow. And your deadlines won’t wait. What can you do today for a story that’s due tomorrow?
Image by UGA College of Ag via flickr.
Recently, as part of a package of studies sent to reporters in advance of their annual meeting, the American Chemical Society put out an embargoed press release on a study of bedbug genes.
The study details how researchers at the University of Kentucky surveyed the entire genomes of 21 different bedbug populations collected from large cities in the Midwest.
They discovered that 14 genes work in various combinations to thwart a type of chemical that has commonly been used to kill the blood-sucking critters. What’s even more fascinating is that most of these genes are active in the insects’ tough outer shell, or cuticle. They code for proteins that pump the chemicals out of the bugs’ bodies or break their molecular bonds, rendering the agents harmless.
Bedbugs, perhaps more than other insects, are masters at becoming resistant to the chemicals we use to try to kill them. That’s thought to be a major reason why they have made a comeback in homes and hotels across the country. This study went a long way toward showing why they’re so hardy and how we might be able to develop better methods to control them in the future.
The problem is that this all sounds a bit familiar to regular readers of Scientific Reports, a research publication from the publishers of Nature.
Scientific Reports published the same research on March 14.
Image by Susan Sermoneta via flickr.
Two recent studies in the news have been clear examples of the correlation vs. causation question that’s part and parcel of covering observational research studies.
And they’re worth taking a look at because the correlations are inherently interesting, even though they almost certainly aren’t causal.
First up is “The Effect of Sexual Activity on Wages,” which was published by Germany’s Institute for the Study of Labor. Predictably, and probably in part due to its terrible title, this study generated lots of headlines like “Have more sex, make more money” from The Wall Street Journal’s Marketplace and Cosmopolitan’s “The More Sex You Have The More Money You Make.”
Well, not exactly. The study found an association between sex and wages. As self-reported sex increased, so, too, did income. Of course, that doesn’t mean that having more sex causes people to make more money, a point that wasn’t stressed clearly enough in some articles for Scientific American blogger Evelyn Lamb’s tastes.
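The logic of a spurious correlation is easy to see with a toy simulation. Here, a hidden confounder – call it “overall health,” a purely hypothetical choice, not anything measured in the study – drives both variables, so they correlate strongly even though neither causes the other:

```python
import random

random.seed(42)

# Hypothetical illustration: a hidden confounder drives both variables.
n = 10_000
health = [random.gauss(0, 1) for _ in range(n)]
# Both outcomes depend on health, not on each other.
sex_freq = [h + random.gauss(0, 1) for h in health]
wages = [50_000 + 5_000 * h + random.gauss(0, 2_000) for h in health]

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = corr(sex_freq, wages)
print(f"correlation: {r:.2f}")  # strongly positive, with zero causation
```

An observational study sees only the two outcome columns, never the confounder, which is why a strong correlation alone can’t support a “have more sex, earn more” headline.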
Medical reporters are likely familiar with ClinicalTrials.gov, the U.S. government-run registry of clinical trials. The site became available in 2000, three years after Congress passed the Food and Drug Administration Modernization Act of 1997 (FDAMA), which, as the site notes,
required the U.S. Department of Health and Human Services, through NIH, to establish a registry of clinical trials information for both federally and privately funded trials conducted under investigational new drug applications (IND) to test the effectiveness of experimental drugs for serious or life-threatening diseases or conditions.
The site – and others around the world – really took off in 2005 after the International Committee of Medical Journal Editors began requiring that researchers register their trials when they started if they wanted to publish the results. Publishing in the peer-reviewed literature is the coin of the realm in academia and also vital for FDA approval, so the carrot worked, according to a 2007 update:
Before the ICMJE policy, ClinicalTrials.gov, the largest trial registry at the time, contained 13,153 trials; this number climbed to 22,714 one month after the policy went into effect (3). In April 2007, the registry contained over 40,000 trials, with more than 200 new trial registrations occurring weekly (Zarin D. Personal communication).
Registration of a trial’s plans – what researchers plan to test, and how – also means there’s a pixel trail if reporters, or any member of the public, want to see if scientists moved the goalposts to make their results look better, or buried negative results. (Also see Ghost protocols: Scientists propose a way to plug major holes in the medical literature)
Reporters who cover medical studies often take great care not to be fooled by the spin put on research by drug companies, universities and even government agencies.
But sometimes the spin is the study itself, and that’s a serious problem. It’s a big story that’s hiding in plain sight, to borrow a phrase Steven Brill likes to use.
By some estimates, half of clinical trials are unpublished. Half. And because positive studies are far more likely to be published than negative studies – a phenomenon called publication bias – the studies that don’t get published often throw some seriously cold water on how good a treatment looks.
If this research hasn’t been published, how do we know it exists? Some of these trials have been released because of lawsuits; others can be found in standardized documents called clinical study reports that drug companies file with the FDA and its counterpart, the European Medicines Agency. Regulatory agencies use them for their reviews, but because they’re never published in medical journals, they remain hidden to the medical community and general public.
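The arithmetic of publication bias can be sketched with a small simulation (the numbers here are invented for illustration, not drawn from any real drug’s trials). The true treatment effect is set to zero, yet if only clearly “positive” trials reach journals, the published literature still suggests a benefit:

```python
import random

random.seed(0)

# Hypothetical illustration of publication bias.
true_effect = 0.0  # the drug genuinely does nothing
# Each trial observes the true effect plus sampling noise.
trials = [random.gauss(true_effect, 1.0) for _ in range(1_000)]
# Suppose only trials with a clearly positive result get written up.
published = [t for t in trials if t > 0.5]

all_mean = sum(trials) / len(trials)
pub_mean = sum(published) / len(published)
print(f"mean effect, all trials:       {all_mean:+.2f}")  # near zero
print(f"mean effect, published trials: {pub_mean:+.2f}")  # looks like a real benefit
```

A meta-analysis that can only see the published trials inherits that inflated average, which is why analyses that recover the unpublished data can flip a drug’s apparent effectiveness.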
For a case in point, consider the antidepressant reboxetine. Unpublished studies that were brought to light in a stunning 2010 meta-analysis in the British Medical Journal showed that Pfizer had failed to publish data – all of it negative – on 74 percent of patients who had participated in the clinical trials of the medication.
“Not only does the drug not work, it really doesn’t work,” wrote the blogger SciCurious in a guest post for Scientific American on the reboxetine revelations.
Image by Sweet One via flickr.
It can be tough to find a medical study that is both important and compelling. But that was the opportunity presented to health reporters this week in the shape of a big study on a humble condiment, vinegar.
What makes this study even more wonderful, in a way, is that it was presented at the American Society of Clinical Oncology, a medical meeting that’s awash in high-stakes, big money, endlessly pitched and spun drug research.
In the midst of that madding crowd was Dr. Surendra Shastri, a preventive oncologist at Tata Memorial Hospital in Mumbai who needed an inexpensive, low-tech way to screen for cervical cancer – the leading cancer killer of women in India.
He found it in the form of sterile vinegar, which turns suspect cells white when it’s swabbed on the cervix.
If you follow me on Twitter, you may have noticed several 140-character conniptions I had last week over coverage of a Danish study that used antibiotics to treat low back pain.
I generally feel pretty protective of health reporters. I’m in the trenches with you. I have good days and bad days, too. Deadline reporting on medical studies is tough and sometimes undervalued for the work serious, balanced coverage requires. I’m with you.
Even so, I was dismayed by most of the stories I was reading.
Reporters were trumpeting the results of two studies published in the European Spine Journal, a less influential medical journal.