Health Equity Resources is an impressively thorough roundup of the latest news, research, and events related to health disparities and the social determinants of health. It’s curated and delivered by email twice a month by Carly Hood, a population health service fellow at the University of Wisconsin’s Population Health Institute.
What follows is just a sampling of the latest installment – the full version is nine pages and available here, along with past issues. If you’d like to join the list, send an email to Hood at firstname.lastname@example.org. Follow her on Twitter @cm_hood. Continue reading
Recently, Dr. Ben Goldacre (@bengoldacre), a prominent critic of drug studies, wanted to find out how often side effects reported by users of cholesterol-lowering drugs called statins were genuinely caused by the medications.
The study he co-authored concluded that most reported side effects of statins aren’t due to the drugs themselves, but to other causes. The study generated front-page headlines in the U.K., with an article in The Telegraph declaring, “Statins have virtually no side effects, study finds.”
Outcry ensued. Patients who experienced side effects on statins begged to differ, and Goldacre’s fans wondered if he had suddenly gone soft on pharmaceutical companies.
In response, Goldacre penned a nuanced post about the study, explaining* that its conclusions were flawed because it was based on incomplete data.
The statin study controversy aside, his blog post makes some key points about how side effects are reported in medical journals that are helpful for health reporters to keep in mind when covering the downsides of new drugs. I’ve boiled some important points down and included them in this tip sheet for AHCJ members.
*Editor’s note: An earlier version of this post used the word “admitting.”
Image by Mark Robinson via Flickr.
The military uses the phrase “the fog of war” to describe the miscalculations and botched decisions that get made in the heat of combat.
But you need not sign up for active duty to run into foggy thinking. Just call a scientist and interview them about their own research.
One of my favorite examples of this is when researchers conduct observational studies that can’t show cause and effect, yet interpret their findings to reporters as if they do. Continue reading
Photo: BlatantNews.com via Flickr
Recently, an editor sent me a study to cover on concussions in teenagers. At least, that’s what we thought the research was about, based on the title of its press release: “Teenagers who have had a concussion also have higher rates of suicide attempts.”
And I was excited to cover the study. Like gut bacteria and anything to do with chocolate or coffee or stem cells, concussion is a hot topic right now. That’s partly because brain scientists are just beginning to understand the lasting impacts of these sometimes subtle but probably cumulative injuries.
And they affect everybody from pro athletes to pee wee football players. So when parents and coaches see the word “concussion,” their thoughts rightfully turn to young athletes. About half of concussions in kids ages 8 to 19 are sports-related, according to a nationwide study of concussions published in 2010 in the journal Pediatrics.
The press release said the study found that kids who have had concussions were not only more likely to attempt suicide, but also to engage in other high-risk behaviors like taking drugs, stealing cars, setting fires and bullying.
The message here is that a kid who gets hit in the head too many times – presumably playing sports – might turn to drug abuse, self-injury and criminal behavior. And that’s the way it was covered in the press. Continue reading
Photo of Karen D. Brown by Matthew Cavanaugh
When first diagnosed with breast cancer, journalist Karen D. Brown didn’t plan to write about it. But as she met with surgeons, anesthesiologists and oncologists who presented her with treatment options, she found the decisions far more confusing than they had seemed when she was reporting on the statistics.
All of a sudden I realized that my medical odyssey and the health news cycle had crossed orbits. I could write about my personal experience and also shed light on a bigger issue that I felt had not yet been told to death – namely, how hard it is for an individual to make decisions based on population-wide statistics, and politically loaded ones at that.
In this article for AHCJ, Brown tells us how she came to write a piece that appeared in The Boston Globe about the conflicts between statistics and emotions and how they affected her decisions.
She writes about how she chose the statistics that she included in her story, what information she did not include to avoid the appearance of a conflict of interest in her future reporting and how she made sure her narrative was fair and accurate. Read about Brown’s experience.
Jonathan Latham, Ph.D.
Remember the burger grown from stem cells? It might be a great idea, except that a single patty – at least one grown using today’s technology – cost a whopping $332,000.
In a new AHCJ tip sheet, Jonathan Latham, Ph.D., executive director of the Bioscience Resource Project, asks whether discoveries like that are breakthroughs or “fakethroughs” – scientific advances that will never progress to new treatments or beneficial products. He also talks about his brand of investigative science journalism and why reporting on new discoveries should probably be more muted.
He has two tips for reporters and advice about what research journalists should cover.
Spend any significant amount of time reporting on research and you’re bound to run across a real stinker of a study.
Too often, the studies that become clickbait on the web or turn up in women’s magazines – usually boiled down to a surprising health tip – are just, well, how do I put this? Crap.
There are a lot of those kinds of studies in the world: studies that are too small to be meaningful, that ask bad or useless questions, that are poorly designed, or that answer a question that has already been answered repeatedly.
These kinds of studies exist because the publish-or-perish culture of academia rewards volume over value. And let’s accept our part in this, too. There’s always a media outlet that’s willing to trumpet a surprising, if completely unsound, study.
In a microcosm, a bad study or two can raise an eyebrow or a chuckle. In a macrocosm, however, the situation is dire. Continue reading
The highest rates of obesity in the U.S. occur among population groups with the highest poverty rates. This apparent paradox becomes more understandable when you consider the economics.
Processed foods and products with added sweeteners and higher fat content are cheaper than nutrient-dense whole fruits and vegetables, fresh whole-grain breads, lean meats and seafood. Farm subsidies have boosted the output of cheap food and further pushed down the prices relative to healthier options. From 1985 to 2000, retail prices of fresh vegetables and fruit rose nearly 120 percent, about six times more than the rate of increase for soft drinks and three times more than that of sweets and fats.
Given these well-established findings, I was surprised by the confusion on display in the flurry of news reports on a new meta-analysis by researchers at Brown University and the Harvard School of Public Health. The researchers crunched the numbers from 27 previous studies and calculated that healthy eating costs $1.48 more per day per adult than eating a low-quality diet ($550 more annually per person). Here are some of the headlines that ensued: Continue reading
One of the most important skills required of reporters who cover medical research is the ability to find and discuss the limits of the studies we cover.
To that end, a trio of professors at Cambridge University recently published a helpful comment in the journal Nature: “Twenty Tips for Interpreting Scientific Claims.” (If you don’t subscribe, you can read the full article for free here.)
Some of my favorites (in no particular order):
- Study relevance limits generalizations – a great reminder that the conditions of any study will limit how its findings can be applied in the real world.
- Bias is rife – We talk about several types of bias in the topic section, like reporting bias and healthy user effect. The article reminds us that even the color of a tablet can shade how study participants feel. Continue reading
With mammograms in the news lately, it’s worth noting that the U.S. Preventive Services Task Force has posted its plan for reviewing and updating its recommendations for screening for breast cancer. The draft research plan lays out the “strategy the Task Force will use to collect and examine research and is the first step in updating the 2009 recommendation,” according to Ana Fullmer at the USPSTF. Recommendations are updated every five to seven years, so she says a new recommendation probably won’t be finished for a few years.
The panel is seeking answers about the specific benefits and harms of screening mammography for women over 40; it is asking whether benefits and risks vary by imaging technique – digital mammography, ultrasound or MRI – and, importantly, it is trying to find out how common ductal carcinoma in situ (DCIS) is in the U.S. and what benefits and harms are involved in treating it.
Experts recently recommended renaming DCIS to exclude the word “carcinoma” so the finding wouldn’t be so frightening to patients. DCIS is an abnormal pattern of cell growth in the milk ducts of the breast. In many cases, it doesn’t progress to cancer. Yet a growing number of women have decided to remove both breasts rather than take their chances that it isn’t dangerous.
Anyone who wants to weigh in on the draft plan is encouraged to submit comments and questions to the Task Force by Dec. 11.