
Evidence-based medical reporting: A brief primer
Date: 06/07/07

by Barbara Gastel, M.D., M.P.H.
Coordinator, M.S. Program in Science and Technology Journalism
Texas A&M University

This primer, adapted from a workshop at an AHCJ conference, is intended mainly to help health care journalists find, read, and evaluate journal articles that report medical research. The main topics touched on are literature searching, study design, and biostatistical concepts. The primer also includes other tips and lists additional readings.

Additional resources

Tip sheets and presentations from the workshop "Medicine 101: Words, numbers and journals" at Health Journalism 2007. Includes information about understanding and using medical language; risks, rates, and ratios; statistical errors; and understanding medical publications.

Tips from Gastel's presentation at Health Journalism 2006

Lists of prefixes, suffixes, and roots (from Murray Jensen, University of Minnesota)

Online dictionary

Merck Manual Home Edition

American Medical Association Manual of Style, 10th edition (2007)

Health Writer’s Handbook, 2nd edition (2005)

Literature Searching: PubMed

PubMed, an online bibliographic database from the US National Library of Medicine, is the main means of searching the medical journal literature. This easy-to-use resource can be accessed at www.pubmed.gov. For research articles, it generally includes abstracts (summaries), which can aid in deciding whether to obtain the entire article. In some cases, links also exist to the full text of articles.

PubMed often identifies many, many articles on the topic of interest. To narrow the list, you can set limits by clicking on the "Limits" tab and checking relevant boxes. Among items on which limits can be set are:

  • language (for example, English only)
  • type of article (for example, reports of randomized controlled trials, review articles, or practice guidelines)
  • "tag terms" (for example, whether the term being searched for appears in the title)
  • whether the article pertains to humans or to animals
  • whether the article was published in a core clinical journal
  • when the article was published
  • what ages it pertains to
  • whether links to full text are available

An example: On the date this primer was drafted, a PubMed search using the term "influenza" yielded 42,548 listings. Then the following limits were set:

  • English language
  • randomized controlled trial
  • title
  • humans
  • core clinical journal
  • published in the past five years
  • all adult: 19 years
  • links to free full text available

The result: a much more manageable nine articles.
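For reporters comfortable with a little scripting, the same kind of filtered search can be composed programmatically through NCBI's E-utilities interface. The sketch below (Python) only builds the query URL; the filter choices mirror the example above, and field tags such as [ti] (title), [la] (language), [pt] (publication type), and [mh] (MeSH term) are standard PubMed search syntax.

```python
from urllib.parse import urlencode

# Compose a PubMed query that mirrors the "Limits" used above.
term = ('influenza[ti] AND english[la] AND '
        'randomized controlled trial[pt] AND humans[mh]')

# The NCBI E-utilities "esearch" endpoint accepts such queries;
# reldate restricts results to the past N days (here, about five years).
base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {"db": "pubmed", "term": term, "reldate": 5 * 365, "retmax": 20}
url = base + "?" + urlencode(params)
print(url)
```

Fetching that URL returns an XML list of matching PubMed IDs, which can then be looked up individually.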

Because PubMed focuses on the journal literature, most of the articles are relatively technical. Especially if you are unfamiliar with a topic, it can be helpful to start with reading that is more general. Two suggestions:

  • Begin with overviews intended for general readerships. Such overviews can readily be accessed through MedlinePlus (another resource of the National Library of Medicine). This site includes both materials from the National Institutes of Health and links to other materials that meet its standards.
  • As a next step, read one or more review articles in journals. Review articles - which summarize the research literature on a topic - tend to be an efficient way to survey the literature before focusing on specific studies. In the example regarding articles on influenza, changing the list of limits to include "review" instead of "randomized controlled trial" yielded six articles summarizing what is known about aspects of the topic.

To gain the most from research articles, it is helpful to know how such articles are structured. Typically, such articles begin with an Abstract and then follow the IMRAD format:

Introduction
Methods
Results
(And)
Discussion

Knowing this structure can aid in looking efficiently at an article. One strategy that often works well is to:

  • Read the Abstract to see whether the article is of potential interest. If the article seems promising, then:
  • Read the Introduction and Discussion to help put the study in context and understand its implications. If the study still seems potentially worth writing about, then:
  • Read the Methods and Results, to help determine (1) whether the study seems to be of sufficient quality to report on (See the section below on study design) and, if so, (2) what information to include about what was done in the research and what was found.

Some additional pointers for searching and using the medical-journal literature:

  • If you notice that the reference lists of several articles include a certain article, consider obtaining that article. Frequent citation tends to mean that other researchers consider the research important.
  • Some medical journals publish editorials that discuss research articles in the same issue. Look for such editorials, which can aid in understanding the research articles and placing them in context.
  • Realize that the newest articles might not yet be included in PubMed. Interviews with experts on your topic can help in identifying the latest literature.
  • For guidance in literature searching, consult a librarian at a local or regional medical library. (Such librarians tend to be very willing to help.) Also see whether a local or regional medical library offers instruction in searching the medical literature.

Study Design: Some Major Types

Evidence-based medical reporting entails reporting on scientific studies addressing medical questions. After identifying a study - for example, through a PubMed search or news release - a next step is to evaluate it. Doing so can help determine whether the study merits reporting and, if so, what to say. Recognizing and analyzing the study design are central to evaluating a study.

One major type of study, the randomized controlled clinical trial, is the "gold standard" for comparing interventions - for instance, for determining whether one drug is more effective than another (or more effective than no treatment at all). Three important characteristics of a randomized controlled clinical trial are:

  • inclusion of a control (comparison) group: Without such a group, one cannot confidently say whether the findings after the intervention are actually an improvement over what otherwise would have occurred.
  • randomization: To avoid bias, the people being studied should be randomly assigned to the experimental group and control group. Otherwise, for example, the researchers might tend to put sicker people in one group or the other.
  • blinding: If possible, neither the researchers nor the people being studied should know who is receiving which intervention. Otherwise, they might be biased in reporting the outcome. (Sometimes blinding is not possible, for example when a surgical and a non-surgical treatment are being compared.)

Items to check when considering reporting on a clinical trial include the following:

  • Was the choice of control intervention reasonable? For example, was the new drug compared with the current drug of choice? Or was it compared with a drug no longer used (in which case the new drug might seem more advantageous than it is)?
  • How was randomization done? Were steps taken to ensure that assignment of groups was truly random?
  • What blinding, if any, was there? For example, if use of a drug being evaluated was compared with use of no drug, did the people not receiving the drug receive a placebo (which looked like the medication but lacked the active ingredient)?
  • How thorough was the follow-up? In particular, what were the dropout rates? (If many of the people who enrolled in the study did not finish it, it may be difficult to conclude much about the effectiveness of the intervention.)
  • What end points were assessed? Were they end points that definitely mattered? For example, did the researchers determine how many people survived how long, or did they only determine how many people had improved laboratory values (which might not necessarily be associated with longer survival)?
  • How logical are the conclusions? Do the researchers' conclusions follow logically from the findings? Or, for example, does the article present the intervention as being more beneficial than the results would indicate?

Another type of study is a cohort study. This type of study follows one or more groups of people over time. It can be prospective (for example, a study that evaluates the health status of a group today and reevaluates it yearly for the next 20 years) or retrospective (for example, a study based on health care records that have been kept over the past 20 years). Cohort studies can be used to characterize typical changes over time, such as those associated with aging. They also can help identify associations of lifestyle factors with outcomes - for instance, links between diet and eventual development of diseases.

Questions to consider when reporting on a cohort study include the following:

  • How complete was the follow-up? (If, for example, there were many dropouts, it is hard to conclude with confidence that the findings are widely applicable.)
  • How reliable are the observations? For example, were items directly observed by trained observers, or were they self-reported?
  • How representative of other populations was the cohort? (If, for example, the group differed considerably from your readership, you might rightly question whether the findings would necessarily apply to it.)

Yet another type of study is the case-control study. In this type of study, the researchers compare people with and without a given condition, in the hope of identifying factors affecting the likelihood of developing it. For example, they may compare rates of cigarette smoking in people with and without a rare type of cancer, in order to see whether cigarette smoking seems to predispose people to developing that type of cancer. Case-control studies tend to be used when other, more convincing types of studies are not ethical or feasible. (To continue with the example: One could not ethically randomize people to smoke or not to smoke, and so a randomized trial would not be acceptable. And because the type of cancer is rare, a very large cohort of people would need to be studied, and so a cohort study might not be feasible.)

In deciding whether a case-control study merits reporting, one item to consider is whether, except for the medical condition, the cases and controls are similar. For example, are the people with and without the rare cancer of similar age, and do both groups have a similar mix of genders? If the groups differ considerably, less of a case can be made that the suspected factor indeed increases likelihood of developing the condition.

Other types of studies include the following:

  • crossover studies: In crossover studies, people serve essentially as their own controls. For example, a person may receive one treatment, then another, and then the first again, meanwhile being observed while receiving each.
  • cross-sectional studies: In cross-sectional studies, people are studied at a single point in time. For example, they may all be surveyed by telephone on the same date, or they may all receive physical examinations during the same week.
  • case series: In a case series, the findings in several (or more) people with a disease are described. Case series can be especially suitable for characterizing a newly discovered disease.
  • meta-analysis: A meta-analysis is a study in which findings from various pieces of research on the same question are combined using rigorous procedures. The goal is to draw conclusions that are more credible than can be obtained from any of the individual studies.

Some resources listed at the end of this primer can aid in evaluating these and other types of studies.

Some Biostatistical Concepts

Understanding five basic statistical concepts - statistical significance, P values, confidence intervals, power, and relative risk - can especially aid in evidence-based medical reporting.

Put simply, statistical significance is the likelihood that findings reflect more than chance. A statistically significant association, however, is not necessarily a causal association. For example, there might be a statistically significant association between carrying a cigarette lighter and developing lung cancer. However, it seems unlikely that the cigarette lighter itself causes lung cancer. Also, statistical significance does not necessarily imply practical importance. For instance, researchers might find that the difference in average amounts of weight lost on two diets is statistically significant. However, if that difference is only a pound or two, it would have little practical importance.

The P value is an indicator of statistical significance. Put simply, it reflects the probability of obtaining findings at least as extreme as those observed if chance alone were at work. The lower the P value, the less likely the findings are a fluke. For example, if a P value is .50, there is a 50:50 chance of obtaining such findings by chance alone. If, however, the P value is .05 (a commonly used cutoff point for statistical significance), that chance is only 1 in 20 (5 in 100). And if the P value is .001, the chance is only 1 in 1000, making the findings much more credible.
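The logic behind a P value can be made concrete with a small calculation. In this hypothetical sketch, 60 of 100 patients improve on a treatment, and under the null hypothesis each patient's improvement is a 50:50 coin flip:

```python
from math import comb

# One-sided P value: the probability of seeing 60 or more improvements
# among 100 patients if improvement were pure chance (a fair coin flip).
n, observed = 100, 60
p_value = sum(comb(n, k) for k in range(observed, n + 1)) / 2 ** n
print(f"P = {p_value:.4f}")  # about 0.028, below the common .05 cutoff
```

In other words, a result this strong would arise by chance alone in only about 3 of every 100 such studies.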

A confidence interval, put simply, is the range in which the true value is likely to lie. It is analogous to the margin of error reported for surveys. Let's say that statistical analysis leads to an estimate that smokers are 3.5 times as likely to have a certain disease as nonsmokers are. If the confidence interval around this estimate extends from 2.0 to 5.0, it is very likely that smokers are somewhere from twice to five times as likely as nonsmokers to have the disease.

Power is the ability of a study to detect an effect if one is present. Especially if relatively few people are studied, a piece of research may be unable to detect an effect even if one exists. For instance, if a side effect of a drug is rare, it may go undetected in a small or medium study. If a journal article about a study says no statistically significant difference existed between groups, check into the study's power. In particular, see whether the journal article discusses the study's power, and if you are interviewing the investigators, ask about it. Remember: Absence of proof is not proof of absence.
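The intuition behind power can be sketched with simple arithmetic. Assuming a hypothetical side effect that occurs in 1 of every 1,000 patients, the chance that a trial of n patients observes even a single case is 1 - (1 - rate)^n:

```python
# Hypothetical side-effect rate and trial sizes, for illustration only.
rate = 0.001
for n in (100, 1000, 10000):
    p_at_least_one = 1 - (1 - rate) ** n
    print(f"{n:>6} patients: {p_at_least_one:.0%} chance of observing a case")
```

A 100-patient trial would most likely miss the side effect entirely, which is why "no significant difference" in a small study proves little.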

Relative risk indicates how strongly, if at all, a given factor is associated with the likelihood of a given condition. For example, if 10 percent of AHCJ members have disease X but 2 percent of non-members have it, the relative risk associated with AHCJ membership is 5. When reporting relative risk, be sure to report the absolute risks (for instance, 10 percent and 2 percent) as well. For example: A relative risk of 5 could reflect, as above, absolute risks of 10 percent and 2 percent - but it could reflect absolute risks of 100 percent and 20 percent, or of .010 percent and .002 percent. The relative risk would be the same, but the practical implications could be quite different. (Note: The 3.5 in the paragraph on confidence intervals also is a relative risk.)
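The arithmetic is simple enough to sketch directly; the figures below are the hypothetical absolute risks from the paragraph above:

```python
def relative_risk(risk_exposed, risk_unexposed):
    """Relative risk = absolute risk in the exposed group
    divided by absolute risk in the unexposed group."""
    return risk_exposed / risk_unexposed

# The same relative risk of 5 can arise from very different absolute risks:
for exposed, unexposed in [(0.10, 0.02), (1.00, 0.20), (0.0001, 0.00002)]:
    rr = relative_risk(exposed, unexposed)
    print(f"{exposed:.4%} vs {unexposed:.4%}: relative risk = {rr:.0f}")
```

All three pairs print a relative risk of 5, which is exactly why the absolute risks belong in the story.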

Ten More Tips for Evidence-Based Medical Reporting

Some additional tips for evidence-based medical reporting:

1. Consider the source. Where was the research published or presented? What are the authors' affiliations? Who funded the research? These factors can affect credibility. For example, research published in a leading journal is more credible than that reported in a newsletter or presented at a conference. Knowing where the authors worked and who funded the research can aid in identifying conflicts of interest and thus potential biases.

2. Consider consistency. See whether various observations within the study all lead to the same conclusion. Also see whether the findings of the current study agree with those of other studies of the same topic. Greater consistency increases confidence in the conclusions drawn.

3. When looking at reports from surveys, check response rates. If response rates are low, beware. The people who completed the survey may well differ in important ways from those who decided not to.

4. Make sure the appropriate type of average is used. Sometimes, the mean (arithmetic average) is valid. But if the distribution is skewed, another measure, such as the median (middle value) may be more appropriate. For example, if most health care journalists earn about the same amount but a few earn several times as much, stating the mean income may misrepresent how much health care journalists typically earn; stating either the median or the interquartile range (from the 25th to 75th percentiles) might give a more representative picture. And describing the overall distribution might be most helpful of all.
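A quick sketch with hypothetical incomes shows how outliers pull the mean away from the typical value while the median and interquartile range stay put:

```python
from statistics import mean, median, quantiles

# Hypothetical annual incomes (in thousands): most cluster near 45-60,
# but two outliers skew the distribution to the right.
incomes = [42, 45, 48, 50, 52, 55, 58, 60, 250, 400]

print(f"mean:   {mean(incomes):.1f}")    # pulled far upward by the outliers
print(f"median: {median(incomes):.1f}")  # close to the typical income
q1, q2, q3 = quantiles(incomes, n=4, method="inclusive")
print(f"interquartile range: {q1:.1f} to {q3:.1f}")
```

Here the mean (106) nearly doubles the median (53.5), so reporting only the mean would badly misrepresent what most of these hypothetical journalists earn.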

5. Be sure that percent change is computed correctly. The initial value should be used as the baseline. Thus, for example, an increase from 100 to 120 is an increase of 20 percent, a decrease from 100 to 80 is a decrease of 20 percent, and an increase from 80 to 100 is an increase of 25 percent.
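The rule above can be captured in a few lines, using the examples from the text:

```python
def percent_change(old, new):
    # The initial (old) value is the baseline.
    return (new - old) / old * 100

print(percent_change(100, 120))  # 20.0  (increase of 20 percent)
print(percent_change(100, 80))   # -20.0 (decrease of 20 percent)
print(percent_change(80, 100))   # 25.0  (increase of 25 percent)
```

Note the asymmetry in the last two lines: the same 20-point move is a 20 percent decrease one way but a 25 percent increase the other, because the baseline differs.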

6. Do not assume that screening for early detection of a disease always is of value. For example: If screening is associated with a longer time between disease detection and death, do people actually live longer-or, for example, do they die when they otherwise would but just know for longer that they have the disease? Also, does the screening sometimes detect cases of a disease that might never have caused a problem?

7. Remember that correlation does not imply causality. Things can be associated without being causally linked. For example, many things have increased since the Association of Health Care Journalists was born in 1997. Few of the increases, however, have been caused by the AHCJ's growth.

8. Seek alternative interpretations of findings. Do the findings necessarily mean what the researchers suggest? What about other possible explanations? Consider alternative interpretations yourself, and ask about them when you interview experts.

9. Consider the extent to which findings can be generalized. How applicable might the findings be to those other than the study participants? For example, if a study was done in college students, do the findings seem to apply to typical newspaper readers? If a study was done in hospitalized patients, how applicable to outpatients do the findings appear to be? What do you think? What do researchers in the field think?

10. If using graphics, make sure that they do not misrepresent the data. For example, setting the baseline of a bar graph at 50 percent instead of 0 percent makes differences seem larger than they are.
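A quick calculation shows how much a truncated baseline exaggerates a difference. With hypothetical scores of 55 percent and 60 percent:

```python
# Two groups score 55 percent and 60 percent.
a, b = 55, 60

# Relative gap between the bars when heights are drawn from a baseline of 0
# versus from a baseline of 50:
gap_from_zero = (b - a) / b                      # 5/60: one bar ~8% taller
gap_from_fifty = ((b - 50) - (a - 50)) / (b - 50)  # 5/10: one bar 50% taller

print(f"baseline 0:  taller bar exceeds shorter by {gap_from_zero:.0%}")
print(f"baseline 50: taller bar exceeds shorter by {gap_from_fifty:.0%}")
```

The underlying numbers are identical; only the baseline changed, yet the visual gap grows roughly sixfold.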

Suggested Reading

The following relatively non-technical books explore further some of the topics addressed in this primer. Sections that may be especially relevant are noted.

  • Health Writer's Handbook, 2nd edition, by Barbara Gastel. Blackwell Publishing, 2005. (See especially Chapter 6, "Evaluating Information.")
  • A Field Guide for Science Writers, 2nd edition, edited by Deborah Blum, Mary Knudson, and Robin Marantz Henig. Oxford University Press, 2006. (See especially Chapter 2, "Reporting from Science Journals;" Chapter 3, "Understanding and Using Statistics;" and Part 4, "Covering Stories in the Life Sciences.")

Detailed guidance appears in

  • How to Report Statistics in Medicine, 2nd edition, by Thomas A. Lang and Michelle Secic. American College of Physicians, 2006. (See especially Part II, "Guidelines for Reporting Research Designs and Activities.")