
Reporting on the confusion over medical tests and the consequences
Date: 06/08/15



Beth Daley, an investigative reporter at the New England Center for Investigative Reporting, describes how she reported on the confusion over a new genetic test for pregnant women. The screening tests’ accuracy was being overstated by the companies selling them, and doctors’ and patients’ misunderstanding of the test results meant that women ran the risk of deciding to abort a healthy fetus.

By Beth Daley

A question nagged at me a year after I had finished a story about suspect Lyme disease tests: Who was in charge of making sure any diagnostic test was accurate?

There are two types of medical tests a person might undergo: a screening test and a diagnostic test. The results of a screening tell a patient how likely they are to have a particular condition; screenings are about risk. A diagnostic test, by contrast, actually diagnoses a condition.

I began reading articles, FDA guidance, legal opinions and more about a class of tests called laboratory-developed tests. It soon became clear that a small exemption in FDA regulation, created decades ago, had been discovered by commercial labs in the last eight years, giving them a way to offer thousands of tests to patients without having to prove that a test accurately diagnosed a condition.

Yet those test results often guided doctors to prescribe everything from antibiotics to chemotherapy. There was no easy way for patients and doctors to distinguish a good test from a bad test. I began looking for a test I could use to explain this problem to readers.

Looking for a test case

I spent weeks calling doctors around the country, asking if they had come across any tests they weren’t quite sure were reliable. I got an onslaught. I eventually found one cancer test to focus on and settled in to read all the scientific literature supporting it via PubMed and a university medical library (to get free copies of the studies). I didn’t know a lot about studies, but I asked doctors for a few key things to look for (e.g., randomized, double-blind studies) and looked up some basic accepted practices necessary for publication in reputable journals.

In the end, I could find no robust, peer-reviewed studies indicating that this particular test worked. I was curious why this test was being reimbursed by the Centers for Medicare and Medicaid Services (CMS).

I hit a brick wall – and remain there – for two reasons: it was challenging to show harm from this test, and CMS denied my public records request for documents that would better explain its decision process for reimbursing the test’s use. My appeal of that denial remains pending, more than a year later.

A lucky break

So I continued looking for another test case but was getting nowhere until an off-the-record conversation with a high-ranking federal regulatory official. It took a lot of wrangling and effort to get that meeting, but after we met and built some trust, I was introduced to a source who knew the genetic testing world well, especially prenatal genetic testing.

Prenatal genetic tests have been taking the obstetrics world by storm. With a simple blood draw as early as ten weeks into a pregnancy, a woman could learn the risk of her fetus having Down syndrome or one of a growing list of other chromosomal conditions.

It wasn’t a bad test: it was so much better than the ultrasound and blood tests that millions of pregnant women had relied on for decades. Soon, however, it became clear why he had told me to focus on it: baby comment boards were filled with posts showing that hundreds of women were deeply confused about the tests and what the screening results meant.

Patients, doctors not getting the whole story

Doctors rely on sensitivity – a test’s ability to correctly detect a condition in the people who have it – and specificity – its ability to correctly rule out the condition in the people who don’t. These two measures are a robust part of the science of tests, and these new prenatal tests had very high sensitivity and specificity. But that wasn’t the whole story.

What the companies were not advertising was a particularly important statistic, especially for very rare conditions: the positive predictive value. Positive predictive value answers a key question: if a test comes back positive, what is the likelihood that test result is correct? It turned out that for some conditions, particularly in women at low risk for a condition, a positive test result could be wrong 50 percent or more of the time.
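
To see how that happens, consider a rough, hypothetical calculation (the numbers here are purely illustrative, not drawn from any particular company’s test). Suppose a screen has 99 percent sensitivity and 99.9 percent specificity for a condition affecting 1 in 1,000 pregnancies. In 100,000 low-risk pregnancies, about 100 fetuses actually have the condition, and the screen correctly flags roughly 99 of them. But among the 99,900 unaffected pregnancies, one-tenth of 1 percent – about 100 women – also get a positive result. Of the roughly 199 positives, then, only about half are true: a positive predictive value near 50 percent, even though the sensitivity and specificity sound nearly perfect.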

Yet companies were not telling women – or doctors – this fact. In fact, company salesmen would use phrases like “near-diagnostic” or “nearly diagnostic,” giving the impression that the tests were almost always right. The tests were screening women for their risk of a condition, but the women, and sometimes even the doctors, were treating the tests as though they actually diagnosed the condition, something that requires a different, more invasive procedure, such as amniocentesis.

Because of these false positives, there were isolated cases of women aborting healthy fetuses that were mistakenly believed to have a birth defect or other serious condition. It was hard to pin down how pervasive the problem was because most women who abort don’t have the fetal material tested. One group at Stanford had documented three cases, but I was denied access to the women. I searched for two months for a woman willing to talk on the record about her experience. Finally, a courageous mother who almost aborted her fetus and later gave birth to a healthy child agreed to talk.

It was very hard to nail down both the science and the human part of the story. The field is moving at a lightning pace, so as I was writing, new, complex papers were coming out almost weekly. Most were produced by the test companies, so it was challenging to see behind the promotion.

Lessons learned

The best lesson I learned was to ask for help off the record from scores of health professionals and other journalists, which allowed me to compare my information to what was happening in other types of healthcare and in other countries.

The information I gleaned didn’t always make it into print, but it helped me write with authority. Also helpful, though it took a lot of effort to obtain, was my main patient’s medical record. It enabled me to go beyond her recollection and document her doctor’s confusion over the tests.

I also learned how to question and parse what appeared to be complex science by finding some basic metrics against which I could compare scientific papers. And I sought out people who could help me understand the differences between studies, such as whether a study was really a randomized trial.

I also knew when to give up. In a perfect world, I would have interviewed a woman who aborted a healthy fetus because of these tests. I tried for a long time. But the sensitivity of the issue, the fact that many women never learn in the end whether the fetus was actually healthy, and the desire to get the information out as soon as possible required me to rely on a woman who almost had an abortion.


 

Beth Daley (@BethBDaley) is an investigative reporter at the New England Center for Investigative Reporting.