Navigating the brave new world of artificial intelligence

You might be receiving a lot more PR pitches about artificial intelligence (AI) in your inbox these days. Gideon Gil, managing editor of Stat, has. Gil moderated a panel at Health Journalism 2018 on AI that aimed to help reporters and editors distinguish between hype and reality.

Briefly, AI is an artificial system that can perceive its environment and take independent action to produce a result. AI products typically demonstrate behaviors associated with human intelligence, such as learning, planning, movement and problem solving.

Some of the most significant breakthroughs in AI have only happened in the past five to seven years, said panelist Igor Barani, M.D., director of the Barrow Artificial Intelligence Center and associate professor of neuro-radiation oncology at the Barrow Neurological Institute.

Barani gave a fascinating presentation on the recent history of AI, machine learning and the development of artificial neural networks, which starting in 2015 began outperforming humans on pattern recognition tasks. Teaching computers to engage in so-called “deep learning” – learning from unstructured data, without prompts or specific programming by humans – has taken much trial and error.

“The inspiration for deep learning came from children and how children learn,” Barani said.

In 2012, researchers at Stanford University and Google connected 16,000 computer processors, which then taught themselves to identify cats in YouTube videos. The New York Times covered this advancement in AI at the time.

Since then, the ability of computers to learn classifications has moved on from cats and into other realms, including health care. One example is a lung cancer screening tool that learns to detect nodules on chest X-rays and identify abnormalities with high accuracy by comparing each image to an extensive database of normal and malignant nodules. This type of AI program could shorten the time between imaging, diagnosis and treatment of cancers, as well as reduce false positives, Barani said.

While AI holds the potential to improve patient care, journalists should be wary of the hype around AI, said Casey Ross, national correspondent for Stat.

Ross co-wrote an investigative series on IBM’s Watson supercomputer that described how it had fallen short of expectations in oncology and is “still struggling with the basic step of learning about different forms of cancer.”

Ross recommended that reporters speak with clinicians who are using AI products to learn about their effectiveness, as well as their quirks. Ross also asked about patient outcomes, safety and costs in his reporting on IBM Watson, and found some of the answers lacking.

“Just keep asking questions and demanding answers,” Ross advised.

Another critical point is the quality of data used in AI products, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin Law School.

Because of regulations and the siloed nature of health care data storage, obtaining quality data across health systems is extremely difficult, Ossorio said. “There are a lot of things in the data that can lead to a situation where the algorithm is trained in the data set, but some of that data is irrelevant or misleading.”

Reporters should ask questions such as:

  • How was the algorithm trained?
  • On what data sets was it trained?
  • What kinds of things were found in this process?
  • What was the cleaning process for these data and how did these data change as a result?
