Artificial intelligence has the potential to help address global health challenges, but stakeholders must promote rapid and equitable data access and establish ethical standards in order to use the technology appropriately, according to a review published in The Lancet.
The review, conducted by Nina Schwalbe, MPH, adjunct professor in the Heilbrunn Department of Population and Family Health at the Columbia University Mailman School of Public Health, and Brian Wahl, PhD, assistant scientist in the Department of International Health at the Johns Hopkins Bloomberg School of Public Health, included research articles published between 2010 and 2019.
The team found that although artificial intelligence could help support population health in resource-limited settings, much of the AI-driven intervention research in global health does not describe ethical or regulatory considerations required for widespread use.
“Especially during the COVID-19 emergency, we cannot ignore what we know about the importance of human-centered design and gender bias of algorithms,” said Schwalbe, who is also the Principal Visiting Fellow at United Nations University – International Institute for Global Health.
“Thinking through how AI interventions will be adapted within the context of the health systems in which they are deployed must be part of every study.”
Data access will play a critical role in ensuring the ethical and appropriate use of AI for global health, the authors noted. Open access to diverse datasets will be particularly important for researchers developing machine learning tools for health interventions.
“Enabling access across borders will require new types of data sharing protocols and standards on interoperability and data labeling. This global movement could be facilitated by an international collaboration so that data are rapidly and equitably available for the development and testing of AI-driven health interventions,” Schwalbe and Wahl said.
In addition to data access, the team emphasized the need for regulated health technology assessments for AI-driven health interventions. These structured assessments would ensure healthcare organizations understand the potential risks and benefits of AI tools.
“Standardized methods for these assessments, including the extent to which these interventions add value over current standards of care, are urgently needed. Such methods should show how well AI tools work outside study settings and highlight related health system costs, including unintended clinical, psychological, and social consequences,” the authors wrote.
When developing and deploying AI solutions, institutional structures will also have an important role to play, Schwalbe and Wahl said.
“Such structures include appropriate regulatory and ethical frameworks, benchmarking standards, pre-qualification mechanisms, guidance on clinical and cost-effective approaches, and frameworks for issues related to data protection, in particular for children and youth, many of whom now have a digital presence from birth,” the team said.
“The impact of AI tools on gender issues is another important consideration and an area in which global guidance is currently lacking.”
Schwalbe and Wahl also noted that only a few studies reported on the usability or acceptability of AI tools from the patient or provider perspective, which is a critical element in AI interventions.
“Human-centered design, an approach to program and product development frequently cited in technology literature, considers human factors to ensure that interactive systems are more usable. Human-centered design is acknowledged as an important factor for the development of new technologies in low- and middle-income countries,” the authors wrote.
The team also noted a lack of randomized clinical trials in their literature review. These trials help establish clinical efficacy in low- and middle-income countries, and will be necessary for AI to improve health outcomes in these settings.
“Given the challenges associated with conducting randomized clinical trials for new health technologies, new approaches such as the Idea, Development, Exploration, Assessment, and Long-Term Follow-up (IDEAL) framework, recommended for the evaluation of novel surgical practices, could serve to provide relevant learning,” the authors said.
“This framework provides guidance on clinical assessment for surgical interventions, in the context of challenges that make clinical trials difficult, including variation in setting, disparities in quality, and subjective interpretation.”
Implementation research will also be critical for the widescale deployment of these novel technologies, Schwalbe and Wahl said.
“Assessing implementation-related factors could help to identify potential unintended consequences at an individual and system level of AI interventions,” the authors said.
With these recommendations, the team expects that new AI solutions will help promote and support global health outcomes, particularly in the wake of the coronavirus pandemic.
“In the eye of the COVID-19 storm, now more than ever we must be vigilant to apply regulatory, ethical, and data protection standards. We hold ourselves to ethical standards around proving interventions work before we roll them out at scale. Without this, we risk undermining the vulnerable populations we are trying to support,” Schwalbe concluded.