Artificial Intelligence in Healthcare: A Post-Pandemic Prescription

In what now seems a distant pre-pandemic period, excitement about the potential of artificial intelligence (AI) in healthcare was already escalating. From the academic and clinical fields to the healthcare business and entrepreneurial sectors, there was a remarkable proliferation of AI and related technologies, e.g., attention-based learning, neural networks, online-meets-offline, and the Internet of Things. The reason for all this activity is clear: AI presents a game-changing opportunity for improving healthcare quality and safety, making care delivery more efficient, and reducing the overall cost of care.

Well before COVID-19 began to challenge our healthcare system and give rise to a greater demand for AI, thought leaders were offering cautionary advice. Robert Pearl, MD, a well-known advocate for technologically advanced care delivery, recently wrote in Forbes that because technology developers tend to focus on what will sell, many heavily marketed AI applications have failed to elevate the health of the population, improve patient safety, or reduce healthcare costs. “If AI is to live up to its hype in the healthcare industry, the products must first address fundamental human problems,” Pearl wrote.

In a December 2019 symposium addressing the “human-in-the-middle” perspective on AI in healthcare, internationally acclaimed medical ethicist Aimee van Wynsberghe made the case that ethics must be integral to the product design process from its inception. In other words, human values and protections should be central to the business model for AI in healthcare.

Health equity should be a driving principle for how AI is designed and used; however, some models may inadvertently introduce bias and divert resources away from the patients in greatest need. Case in point: a predictive AI model was built into a health system’s electronic health record (EHR) to address “no-show” patients by overbooking their appointment slots. Researchers determined that using personal characteristics from the EHR (ethnicity, financial class, religion, body mass index) could systematically divert resources away from marginalized individuals. Even a prior pattern of no-shows is likely to correlate with socioeconomic status and chronic conditions.
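
To make that mechanism concrete, here is a minimal, hypothetical Python sketch — synthetic patients, made-up risk weights and threshold, not the model the researchers studied — of how a no-show predictor used to drive overbooking can concentrate double-booked slots on already marginalized patients, even when group membership is never an explicit input:

```python
# Hypothetical illustration only: synthetic patients, made-up weights and threshold.
import random

random.seed(0)

def make_patient(marginalized: bool) -> dict:
    """Synthesize a patient; prior no-shows are assumed (for illustration) to be
    more common in the marginalized group due to transport, work, and cost barriers."""
    prior_no_shows = random.choices(
        population=[0, 1, 2, 3],
        weights=[2, 3, 3, 2] if marginalized else [6, 2, 1, 1],
    )[0]
    return {"marginalized": marginalized, "prior_no_shows": prior_no_shows}

def predicted_no_show_risk(patient: dict) -> float:
    """Toy risk score: group membership is never used directly,
    but prior no-show counts act as a proxy for it."""
    return min(1.0, 0.15 + 0.25 * patient["prior_no_shows"])

OVERBOOK_THRESHOLD = 0.5  # appointments above this risk get double-booked

patients = [make_patient(marginalized=(i % 2 == 0)) for i in range(10_000)]

for group_is_marginalized in (True, False):
    members = [p for p in patients if p["marginalized"] == group_is_marginalized]
    flagged = sum(predicted_no_show_risk(p) > OVERBOOK_THRESHOLD for p in members)
    label = "marginalized" if group_is_marginalized else "non-marginalized"
    print(f"{label:16}: {flagged / len(members):.0%} of slots double-booked")
```

Because prior no-show counts stand in for social disadvantage, the overbooking burden lands disproportionately on one group — exactly the kind of quiet resource diversion the researchers flagged.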

Fast forward to today, when AI seems to be a permanent fixture in national news coverage. Noting that journalists often overstate the tasks AI can perform, exaggerate claims of its effectiveness, neglect the level of human involvement, and fail to consider related risks, self-professed skeptic Alex Engler offered what I believe are important considerations in his recent article for the Brookings Institution. Here are a few:

  • AI is only helpful when applied judiciously by subject-matter experts who are experienced with the problem at hand. Deciding what to predict and framing those predictions is key; algorithms and big data can’t effectively predict a badly defined problem. In the case of predicting the spread of COVID-19, look to the epidemiologists who are building statistical AI models that explicitly incorporate a century of scientific discovery.
  • AI alone can’t predict the spread of new pandemics because there is no database of prior COVID-19 outbreaks as there is for the flu. Some companies are marketing products (e.g., video analysis software, AI systems that claim to detect COVID-19 “fever”) without the necessary extensive data and diverse sampling. “Questioning data sources is always a meaningful way to assess the viability of an AI system,” Engler wrote.
  • Real-world deployment degrades AI performance. For instance, in evaluating CT scans, an AI model that can differentiate between healthy people and those with COVID-19 might start to fail when it encounters patients who are sick with seasonal flu. Regarding claims that AI can be used to measure body temperature, real-world environmental factors make such measurements far less reliable than those taken under laboratory conditions.
  • Unintended consequences will occur secondary to AI implementation. Consolidation of market power, insecure data accumulation, and surveillance concerns are very common byproducts of AI use. In the case of AI for fighting COVID-19, the surveillance issues have been pervasive in countries throughout the world.
  • Although models are often perceived as objective and neutral, AI will be biased. Bias in AI models shows up as skewed estimates across different subgroups; a simple subgroup audit, sketched after this list, is one way to surface it. For example, using biomarkers and behavioral characteristics to predict the mortality risk of COVID-19 patients can produce estimates that misrepresent that risk for some groups. “If an AI model has no documented and evaluated biases, it should increase a skeptic’s certainty that they remain hidden, unresolved, and pernicious,” Engler wrote.
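
As a minimal illustration of that audit — synthetic outcome records and a hypothetical risk model, not any system Engler reviewed — the sketch below compares how often a mortality-risk model misses true deaths in each subgroup, rather than reporting a single overall error rate:

```python
# Hypothetical illustration only: synthetic outcome records and model flags.
from collections import defaultdict

# Each record: (subgroup, model_flagged_high_risk, patient_died)
records = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, False),
    ("group_a", False, True), ("group_b", False, True), ("group_b", False, True),
    ("group_b", True, True), ("group_b", False, False), ("group_b", False, True),
]

deaths = defaultdict(int)   # true deaths per subgroup
missed = defaultdict(int)   # deaths the model failed to flag, per subgroup

for subgroup, flagged_high_risk, died in records:
    if died:
        deaths[subgroup] += 1
        if not flagged_high_risk:
            missed[subgroup] += 1

for subgroup in sorted(deaths):
    miss_rate = missed[subgroup] / deaths[subgroup]  # false-negative rate among deaths
    print(f"{subgroup}: model missed {miss_rate:.0%} of actual deaths")
```

A single aggregate accuracy number would hide the gap between the two groups; reporting per-subgroup miss rates is what makes a model’s biases documentable and evaluable.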

Based on what we’ve learned about the limitations and potential harms of AI in healthcare — much of which has been amplified by COVID-19 — what treatment plan would I prescribe going forward? First, I would encourage all healthcare AI developers and vendors to involve ethicists, clinical informatics experts, and operational experts from the inception of product development.

Second, I would recommend that healthcare AI be subjected to a higher level of scrutiny. Because AI is often “built in” by a trusted business partner and easily implemented, objective evaluation may be waived. As data science techniques become increasingly complex, serious consideration must be given to multidisciplinary oversight of all AI in healthcare.

David Nash, MD, MBA, is founding dean emeritus and the Dr. Raymond C. and Doris N. Grandon Professor of Health Policy at the Jefferson College of Population Health. He serves as special assistant to Bruce Meyer, MD, MBA, president of Jefferson Health.