Medical publishing has faced many challenges as the Internet has evolved: predatory journals, the rise of open-access journals, and the simple fact that subscriptions to stacks and stacks of physical paper journals are dwindling, removing a long-time key funding source.
In one recent article – ironically enough in the journal Circulation: Cardiovascular Quality and Outcomes – Harlan M. Krumholz, M.D., describes nine “deficiencies in the current model that fuel the sense that journals as we have known them are approaching their final act.” Continue reading
Photo: Rama via Wikimedia Commons
A common type of bias that plagues medical research across all journals is publication bias: studies that find positive results are considered more interesting and therefore more likely to be published.
Positive findings about drugs in particular tend to have a higher chance of ending up in a journal than negative ones – especially among industry-funded studies – but publication bias appears across the board.
That’s what makes the open-access Journal of Negative Results in Biomedicine so interesting, and helpful for journalists. The most common word you’ll find in the titles of these studies is “not.” Continue reading
If it seems the newest studies are always reporting some new link – an association between two things or an increase or decrease in this, that or the other – it’s not your imagination.
Positive findings – those that find … “something” – tend to end up in journals more often. But a recent study in PLOS ONE suggests that this trend has decreased, thanks to a change in trial reporting standards around the year 2000. Continue reading
White papers can be useful tools for journalists. Ideally, they provide authoritative, in-depth information from government agencies or nonprofits about specific policies, diseases, programs, or issues. However, they can also be powerful marketing tools, used by corporations to position a specific product or service as the “solution” to whatever the “problem” is.
Then there is the white paper released by a nonprofit, but developed with corporate financial support. Continue reading
One of the most important skills required of reporters who cover medical research is the ability to find and discuss the limits of the studies we cover.
To that end, a trio of professors at Cambridge University recently published a helpful comment in the journal Nature: “Twenty Tips for Interpreting Scientific Claims.” (If you don’t subscribe, you can read the full article for free here.)
Some of my favorites (in no particular order):
- Study relevance limits generalizations – a great reminder that the conditions of any study will limit how its findings can be applied in the real world.
- Bias is rife – We talk about several types of bias in the topic section, like reporting bias and the healthy user effect. The article reminds us that even the color of a tablet can shade how study participants feel. Continue reading