Tag Archives: statistics

Tip sheet series to focus on red flags to look for in medical studies

Tara Haelle

About Tara Haelle

Tara Haelle (@TaraHaelle) is AHCJ's medical studies core topic leader, guiding journalists through the jargon-filled shorthand of science and research and enabling them to translate the evidence into accurate information.

With thousands of medical studies published every day, it’s impossible to cover even 1 percent of them. When you can only choose a tiny fraction of studies to cover — particularly if you freelance or your editor gives you some autonomy and flexibility in this area — how do you decide whether or not to cover a study?

Reasons vary: Some people focus on the better-known, “more prestigious” journals, although that approach has its drawbacks.

Tips help remind reporters to understand limits of the studies we cover

Brenda Goodman

About Brenda Goodman

Brenda Goodman (@GoodmanBrenda), an Atlanta-based freelancer, is AHCJ’s topic leader on medical studies, curating related material at healthjournalism.org. She welcomes questions and suggestions on medical study resources and tip sheets at brenda@healthjournalism.org.

One of the most important skills required of reporters who cover medical research is the ability to find and discuss the limits of the studies we cover.

To that end, a trio of professors at Cambridge University recently published a helpful comment in the journal Nature: “Twenty Tips for Interpreting Scientific Claims.” (If you don’t subscribe, you can read the full article for free here.)

Some of my favorites (in no particular order):

  1. Study relevance limits generalizations – a great reminder that the conditions of any study will limit how its findings can be applied in the real world.
  2. Bias is rife – We talk about several types of bias in the topic section, like reporting bias and the healthy-user effect. The article reminds us that even the color of a tablet can shade how study participants feel.

Stories are waiting to be found in new stats on seniors

Judith Graham

About Judith Graham

Judith Graham (@judith_graham) is a freelance journalist based in Denver and former topic leader on aging for AHCJ. She has written for The New York Times, Kaiser Health News, The Washington Post, the Journal of the American Medical Association, STAT News, the Chicago Tribune, and other publications.

Older Americans 2012, a new report from the Federal Interagency Forum on Aging-Related Statistics, is an important resource for reporters on the aging beat.


It’s an overview of all kinds of issues affecting older adults based on data from 2009, the latest available. As such, it doesn’t reflect the full impact of the economic downturn on older Americans. But it’s still full of nuggets of interesting information.

Some items that caught my eye:

  • Fewer seniors are living in poverty. Between 1974 and 2010, the proportion of older adults with incomes below the poverty threshold fell from 15 percent to 9 percent.
  • More seniors now fall in the “high income” category. Over the same period, the share of well-off older adults grew from 18 percent to 31 percent.


Obesity doctor calls journalists’ statistical knowledge into question

Pia Christensen

About Pia Christensen

Pia Christensen (@AHCJ_Pia) is the managing editor/online services for AHCJ. She manages the content and development of healthjournalism.org, coordinates AHCJ's social media efforts and edits and manages production of association guides, programs and newsletters.

Yoni Freedhoff, M.D., founder of Ottawa’s Bariatric Medical Institute, writes about two studies about obesity and questions whether journalists are skilled enough in statistical analysis to accurately report on them.

Freedhoff says a new report challenges an earlier study, one published in the New England Journal of Medicine and widely reported by the media, arguing that the original was statistically flawed. He is skeptical that the new study will receive attention from the journalists who covered the first.

The original study, “The Spread of Obesity in a Large Social Network over 32 Years” by Nicholas A. Christakis, M.D., Ph.D., M.P.H., and James H. Fowler, Ph.D., was widely reported with headlines proclaiming that “Obesity is socially contagious” in 2007.

A new study by Indiana University’s Russell Lyons, published in Statistics, Politics, and Policy, claims “the assumptions behind the statistical procedures used were insufficiently examined.”

As Freedhoff notes, the NEJM has an impact factor of 50, while Statistics, Politics, and Policy has an impact factor of 0.857, leading one to wonder how many reporters have even heard of the new study.

But Freedhoff – who admits he’s no statistics expert – questions whether journalists will report on the new study because they do not have the statistical knowledge to do so.

All in all, even if you’re not a statistician, Lyons’ paper is worth a sober read and reflection, and here’s something else to chew on – the journalists who were originally all over Christakis’ and Fowler’s work? I’d bet every last penny I’ve got that not a single one of them were skilled enough in statistical analysis to analyze it. Really, why should they have been? They’re journalists, not statisticians. No, instead they smelled a good story, and ran with it. Those same journalists who shouted from the rooftops that obesity’s contagious? I’m betting the vast majority of them are going to be silent on this one, yet wouldn’t re-reporting be the socially responsible, ethical, and journalistic right thing to do?

Update: Brian Reid found this paper, “Examining Dynamic Social Networks and Human Behavior,” which appears to be a response to Lyons’ research; Christakis and Fowler reference his critique specifically at least twice in the paper.

So, reporters, let’s hear what you think: Do you know enough about statistics to analyze and report on the new study? Or were you even aware of the new study?

It’s certainly worth pointing to AHCJ’s most recent slim guide here: Covering Medical Research, which helps journalists analyze and write about health and medical research studies.

It offers advice on recognizing and reporting the problems, limitations and backstory of a study, as well as publication biases in medical journals, and it includes 10 questions you should answer to produce a meaningful and appropriately skeptical report. The guide, supported by the Robert Wood Johnson Foundation, is a road map to help you do a better job of explaining research results to your audience.

An earlier slim guide, “Covering Obesity: A Guide for Reporters,” also might come in handy for covering the topic.

Critics point out issues in patient satisfaction ratings

Andrew Van Dam

About Andrew Van Dam

Andrew Van Dam of The Wall Street Journal previously worked at the AHCJ offices while earning his master’s degree at the Missouri School of Journalism.

On the heels of a government proposal to tie hospital incentive payments to patient satisfaction ratings, a few outlets have started looking at the validity of such measurements.

At HealthLeaders Media, Cheryl Clark reports that regional differences in tendency to be satisfied (the numbers show that New Yorkers are harder to please than Midwesterners and New Englanders, for instance) mean that any absolute number thresholds issued by the feds would penalize hospitals in parts of the country where folks are less likely to respond well to surveys.

And on KevinMD.com, William Sullivan, D.O., J.D., takes a few swings of his own, first at the ratings’ sampling and statistical grounding, then at what he says is hospitals’ over-reliance on percentile quality ratings.

The problem, according to Sullivan? Overall patient satisfaction is quite high, so doctors’ ratings cluster tightly around the low 90s on a 100-point scale. That means even a small shift in absolute rating causes a huge jump in percentile. On at least one system, a 4-percentage-point absolute drop will take a doctor from the 90th percentile to the 50th. And, thanks to the aforementioned sampling issues, that drop can be caused by a handful of particularly ornery patients, who, Sullivan writes, are thus given massive leverage.

With our employment and our compensation hinging on every “5” we can get, doctors are being coerced into giving patients whatever they want, regardless of medical appropriateness. When we cater to satisfaction scores more than we cater to proper medical care, we are violating our oath, devaluing our education, and potentially harming our patients.
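Sullivan’s clustering argument lends itself to a quick back-of-the-envelope simulation. The sketch below uses made-up numbers (a peer group whose ratings cluster around 91 on a 100-point scale), not actual survey data, but it shows the mechanism: when scores bunch tightly, a 4-point absolute drop translates into a dramatic percentile fall.

```python
import random

random.seed(0)

# Hypothetical peer group: overall satisfaction is high, so ratings
# cluster tightly in the low 90s on a 100-point scale. These numbers
# are illustrative, not drawn from any real survey.
peers = [min(100, max(0, random.gauss(91, 2))) for _ in range(1000)]

def percentile_rank(score, group):
    """Percent of the group scoring strictly below the given score."""
    return 100.0 * sum(s < score for s in group) / len(group)

before = percentile_rank(93.5, peers)  # a doctor slightly above the pack
after = percentile_rank(89.5, peers)   # same doctor after a 4-point drop
print(f"{before:.0f}th percentile -> {after:.0f}th percentile")
```

With this assumed spread, the 4-point drop moves the doctor from near the top of the pack to well inside the bottom half; the exact figures depend entirely on how tightly the scores cluster, which is precisely Sullivan’s point.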

AHCJ Resource: Analyze patient satisfaction surveys for your local hospitals

“Numbers can be a start – not the end – of a story,” the AHCJ website notes. Remember that patient satisfaction scores only mean so much. Sometimes the best doctors have gruff demeanors while those with inferior skills have great bedside manners. Patients may not recommend hospitals to friends because they dislike the food or think their roommates were too loud. But if patients report that doctors or nurses didn’t communicate well, that very well could affect the care the patients received. Data can serve as a valuable tip sheet, generating ideas and questions in your pursuit of a story.

For hospital overall survey results, AHCJ provides comparisons of data first released in March 2008 and updated quarterly since, allowing journalists to track overall survey results over a lengthy timeline.

How numbers can be used to buttress falsehoods

Andrew Van Dam


On The New York Times’ Well blog, Tara Parker-Pope interviewed NYU journalism professor Charles Seife, author of Proofiness: The Dark Arts of Mathematical Deception. While the book’s not exclusively focused on health care, the interview does touch upon numbers and health journalism.

Once you get past all the goofy catchphrases (proofiness! randumbness!), the basic point Seife makes in the interview, that correlation is not causation, shouldn’t surprise anyone. Nevertheless, I enjoyed his elegant, health-related illustration of the phenomenon:

We are extraordinary pattern-matchers. Anytime there is something that is happening, we try to find a cause. But sometimes in medicine, sometimes things are absolutely random. Our minds don’t accept that. We must find a cause for every effect.

A really good example is the autism issue. Whenever a parent has a child who ends up being autistic, the parent more than likely says, “What caused it? How did it happen? Is there anything I could have done differently?” This is part of the reason why people have been so down on the M.M.R. vaccine, because that seems like a proximate cause. It’s something that usually happened shortly before the autism symptoms appeared. So our minds immediately leap to the fact that the vaccine causes autism, when in fact the evidence is strong that there is no link between the M.M.R. vaccine or any other vaccines and autism.
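Seife’s point about our pattern-matching instinct can be made concrete with a small simulation (purely illustrative, not from the book): generate enough unrelated random series, compare every pair, and a seemingly striking correlation will turn up by chance alone.

```python
import random
from itertools import combinations

random.seed(1)

def pearson_r(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 50 completely unrelated random "trends," 10 observations each;
# think of them as made-up weekly measurements with no real connection.
series = [[random.gauss(0, 1) for _ in range(10)] for _ in range(50)]

# Scan all 1,225 pairs and report the strongest apparent relationship.
strongest = max(abs(pearson_r(a, b)) for a, b in combinations(series, 2))
print(f"strongest 'pattern' among pure noise: |r| = {strongest:.2f}")
```

With this setup the best-looking pair typically correlates quite strongly even though every series is pure noise, which is exactly why a single surprising association, found after the fact, is weak evidence of causation.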

One caveat: Covering Health is not in the book review business, and I haven’t yet read Proofiness beyond what’s been excerpted.