Trying to write critically about a new use of artificial intelligence?
Start by asking your sources three questions:
- How far away are they from the point of delivery?
- How much data are they working with and what is the diversity or scope of the population the data was gathered from?
- And finally, what kinds of algorithms did they apply and what sorts of devices are they limited to using?
Those were some of the lessons from a panel of experts who recently gathered at The Aspen Institute to address the promise and challenges of A.I. with the D.C. Chapter of the Association of Health Care Journalists. Speakers included the National Institutes of Health’s Dr. Sameer Antani, the American Medical Informatics Association’s Dr. Douglas Fridsma, Children’s National Health System’s Dr. Marius Linguraru and AstraZeneca’s Anand Subramony.
Applications of artificial intelligence presented to the group ranged from machine-learning algorithms at Children’s National Health System that help recognize slight facial deformities in infants, which could lead to earlier diagnoses of genetic disorders, to AstraZeneca’s work to more quickly identify molecules that could be successful drug targets, to broader uses harnessing the power of electronic health records.
It’s crucial to push for an understanding of how close an idea is to reality and how large a change it could make in health care, suggested Fridsma, the president and CEO of the American Medical Informatics Association. “People have done pilots where they’ve got some really interesting stuff,” he said. “But there’s a big gap in our health care environment between the change layer and the reality layer.”
Reporters need to ask about the scope of the data and what kinds of algorithms researchers have been applying. “A more closed environment becomes narrower,” said Antani, the acting chief of the Lister Hill National Center for Biomedical Communications Engineering Branch at the National Institutes of Health’s National Library of Medicine.
The diversity of the datasets is also crucial to understand, to ensure A.I. isn’t working with data that is inherently biased.
“If you train on the VA dataset — and they’ve got good datasets — you’re going to have a bunch of old white men and you’re going to miss important things about other important racial and underrepresented minorities,” Fridsma said. “As we think about these algorithms and the data, we have to be cognizant of that.”