I wrote in a previous blog about the importance of understanding confounding by indication and being sure to ask researchers about it when covering observational studies that appear to suggest a particular treatment or intervention might contribute to a specific effect. I’m passionate about this type of study bias because not considering it — which happens a LOT — can lead people to decline otherwise helpful treatments or leave them experiencing more harm and pain because of unfounded fears.
In the previous example I discussed, a belief that induction of labor increases the risk of cesarean delivery — which much research has appeared to support — may lead a pregnant woman to avoid induction when, in fact, waiting for spontaneous labor might actually be the riskier choice. This isn’t a settled question in OBGYN research, but the possibility that common wisdom (inductions increase cesarean risk) is wrong has substantial implications for how labor and delivery are managed.
Ideally, conscientious researchers should be considering this in their study design and analysis already and including relevant discussion in their limitations section. All too often, however, they don’t. So journalists must — and must be prepared to push when researchers aren’t so willing to consider this type of bias. That happened to me when I covered a study a few years ago about the possible risks of depression from hormonal birth control.
In short, the study found a correlation between hormonal contraception prescriptions and antidepressant prescriptions, from which the author concluded hormonal contraception likely causes depression in some women. Though the paper discussed dose-response relationships, they were inconsistent and led me to question whether the apparently increased risk of depression was actually related to the birth control. (If it was, then, in theory, higher doses of hormonal birth control should have correlated with a higher risk and possibly severity of depression, but the data did not show that.)
And it wasn’t hard to think of a half dozen possible factors that contributed to confounding by indication in this study: Common times when females begin using hormonal contraception include adolescence (for acne, regulating menstrual cycles or beginning sexual activity) and major relationship changes (starting a new relationship, getting a divorce, re-entering the dating pool after a partner’s death, etc.). These also happen to be times when depression risk is heightened. Beginning birth control could also be associated with sexual assault, which is common enough that it might actually influence the findings of a correlation study like this.
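For readers who like to see the mechanism in numbers: here is a toy simulation of how this kind of confounding works. The probabilities below are made up for illustration and have nothing to do with the actual study; the point is simply that when one underlying reason (a major life change, say) raises both the chance of starting contraception and the chance of depression, the contraception users show a higher depression rate even though contraception causes nothing in this model.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Toy model: a latent 'indication' (e.g., a major life change)
    drives BOTH contraception use and depression. Contraception itself
    has zero causal effect on depression here."""
    depressed_on = on = depressed_off = off = 0
    for _ in range(n):
        indication = random.random() < 0.3  # hypothetical base rate
        # The indication makes starting contraception more likely...
        contraception = random.random() < (0.6 if indication else 0.2)
        # ...and it alone raises depression risk (note: no contraception term).
        depression = random.random() < (0.15 if indication else 0.05)
        if contraception:
            on += 1
            depressed_on += depression
        else:
            off += 1
            depressed_off += depression
    return depressed_on / on, depressed_off / off

rate_on, rate_off = simulate()
print(f"depression rate, on contraception:  {rate_on:.3f}")
print(f"depression rate, off contraception: {rate_off:.3f}")
```

Running this shows a markedly higher depression rate in the contraception group, a "finding" produced entirely by the shared indication. That gap is exactly what a naive correlation study would misread as a drug effect.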
So, I asked the researcher, what if the reason these women sought contraception prescriptions is the same reason they may be at higher risk for depression? His answer left my jaw on the floor: Since sex was fun and always associated with positive experiences, it didn’t make sense that starting to have sex could have anything to do with depression. (He appeared to think — though it was hard for me to tell — that sexual assault would not be common enough to be related.) I was so stunned that I asked the question again multiple ways and pressed him. (I admit to asking at one point whether he actually knew any women in real life. Maybe not a best practice, but neither is expecting that every woman enjoys sex in all circumstances.)
Needless to say, he didn’t see confounding by indication as a possible issue at all, and I would have felt I was neglecting my job as a journalist if I didn’t consider it and get outside perspectives on how it might factor into the findings. (My conclusion in that article: It’s complicated.)
My point, again, is that journalists need to give thought to confounding by indication even when — especially when — the researchers themselves have not. This is easier to do the better you know your topic, but even if you don’t, it’s worth adding to your regular questions of outside sources, “What else could have caused these outcomes?”