Alterovitz is also looking for other ways AI can help VA staff members make better use of their time and help patients in areas where resources are limited.
“Being able to cut the (clinician) workload down is one way to do that,” he said. “Other ways are working on processes, so reducing patient wait times, analyzing paperwork, etc.”
Barriers to AI
But Alterovitz notes there are challenges to implementing AI, including privacy concerns and the difficulty of understanding how and why AI systems make the decisions they do.
Last year, DeepMind Technologies, an AI firm owned by Google, used VA data to test a system to predict deadly kidney disease. But for every correct prediction, there were two false positives.
Those false results may cause doctors to recommend inappropriate treatments, run unnecessary tests, or take other actions that could harm patients, waste time, and reduce confidence in the technology.
“It’s important for AI systems to be tested in real-world environments with real-world patients and clinicians, because there can be unintended consequences,” said Mildred Cho, the Associate Director of the Stanford Center for Biomedical Ethics.
Cho also said it’s important to test AI systems on a variety of demographics, because what works for one population may not work for another. The DeepMind study acknowledged that more than 90 percent of the patients in the dataset used to test the system were male veterans, and that performance was lower for female patients.
Alterovitz said the VA is taking those concerns into account as the agency experiments with AI and tries to improve upon the technology to ensure it is reliable and effective.
This story was produced by the American Homefront Project, a public media collaboration that reports on American military life and veterans. Funding comes from the Corporation for Public Broadcasting.