Original article was published by Christopher Tao on Artificial Intelligence on Medium
Two Major Barriers to Applying AI in the Medical Industry
Some considerations of AI after the COVID-19 pandemic
The IT industry is probably one of the industries least affected by the COVID-19 outbreak. We’ve seen the acceleration of Zoom and other IT companies that have actually “benefited” from this disaster. In fact, beyond communication tools, there is another area of IT that used to be hot but has become even more popular and controversial: Artificial Intelligence.
As many countries have suffered, or are still suffering, from a shortage of medical resources, especially medical professionals, more and more people are paying attention to AI. After all, AI was born to free our hands from manual and repetitive jobs.
AI Applications during the COVID-19 Pandemic
One of the most popular applications of AI in the medical industry is reading tomography scans. When the virus broke out in Wuhan, China, the shortage of resources became apparent very quickly, and this large city of 10 million people suffered heavily. In March, however, an AI company called YITU released an AI-powered intelligent evaluation system that reads chest CT scans automatically. By assisting in the diagnosis of COVID-19 with very high accuracy, it lifted a huge burden from doctors’ shoulders. The system was subsequently adopted in many other cities and had helped screen about 100,000 suspected cases by the end of March.
China is not the only country that has enlisted AI in the fight against the virus; in fact, countries all around the world are doing so. For example, a collaboration between Microsoft and La Trobe University also produced solid deliverables. AI applications in computed tomography are even more popular in the US. “The role of the radiologist will be obsolete in five years”, said Vinod Khosla. Although I don’t support that point of view, it shows how competitive AI is considered to be in this area.
It looks like AI is going to become more and more popular in the medical industry, and the pandemic is widely expected to accelerate its growth. In this article, however, I will introduce two major barriers to its application. Rather than simply repeating “there are ethical concerns” like most other articles, I will list the specific reasons and discuss them in detail.
Why do we need AI to assist radiological diagnosis?
You might think that the pandemic is a kind of “Black Swan”, something that happens rarely in our history and, hopefully, in our future. Regardless of the correctness of that statement, there are definitely reasons to bring AI assistance into radiology clinics beyond easing the burden on the medical system. Namely, the accuracy of AI algorithms in diagnosing complicated cases can actually be better than that of medical professionals. A considerable amount of research shows that in certain areas, the performance of an AI algorithm can surpass that of a single medical professional.
Although it is generally not believed that AI will replace radiologists, we have to admit that it can assist radiologists in improving their performance and significantly reducing diagnosis time.
Besides the performance improvements, there are other reasons for introducing AI-assisted radiological clinics. For example, by reducing diagnosis time and relieving the burden on radiologists, prices might go down, so that related medical services such as breast cancer screening could more easily be rolled out to people in poverty.
As another example, AI can assist in prioritizing patients, so that those at higher risk can be treated more urgently and, consequently, more lives could be saved.
We have seen many positive aspects of AI applications in the medical industry, so why is it so difficult for them to grow exponentially? The major barriers are as follows.
Barrier 1: The Nature of Black Box
A black box can do its job well, but we don’t know how it does so. It is like a machine sealed in an opaque box: you put some materials in, and it outputs what you want, but all the internal workings are hidden from the users.
Not all machine learning algorithms are black boxes; the Decision Tree algorithm, for example, produces rules a human can read. Unfortunately, the most effective approach in medical imaging is not classic machine learning but deep learning.
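To make the contrast concrete, here is a minimal sketch (in plain Python, using made-up biomarker data, not any real clinical dataset) of why a decision tree is not a black box: even the simplest tree, a one-level “stump”, boils down to a rule you can print and show to a clinician. A deep network, by contrast, would give you only millions of numeric weights.

```python
import random

random.seed(0)

# Hypothetical data: a single biomarker value per patient; the hidden
# ground truth is that values above 2.0 indicate disease.
data = [random.uniform(0, 4) for _ in range(200)]
labels = [x > 2.0 for x in data]

def fit_stump(xs, ys):
    """Train a one-level decision tree by scanning every candidate
    threshold and keeping the one with the best training accuracy."""
    best_t, best_acc = None, 0.0
    for t in sorted(xs):
        acc = sum((x > t) == y for x, y in zip(xs, ys)) / len(ys)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

t = fit_stump(data, labels)
# The learned model is a single, fully inspectable rule:
print(f"if biomarker > {t:.2f} then sick else healthy")
```

The point is not that stumps are good diagnostic models; it is that their entire decision process fits in one line a domain expert can verify against medical knowledge.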
Deep learning has shown that it can outperform humans in many areas where humans used to be confident, such as the game of Go, which was cracked by AlphaGo a few years ago.
Our science has developed over several hundred years, and the law of causation has always been its cornerstone. If we can’t explain why something works, it is going to be difficult to convince people to trust it.
Now, suppose AI were not a black box, and it could sometimes surpass professionals in some areas. What would you want to know? Yes, we would definitely want to “learn” the “knowledge” inside the machine learning model back for ourselves. Of course, you and I are not the only ones with this idea. A lot of research has been conducted on learning back from trained AI models, such as a study done at Stanford University. More than 100,000 chest X-rays were used to train a deep learning model, and then the researchers tried to extract some clues from it. However, nothing useful was found: the deep learning model can only teach itself and learn by itself. Imagine someone telling you that the life span of your heart is mostly decided by your shoulder blade, a claim that almost no medical literature supports. Would you trust this person, even if they could very accurately predict when heart failure would happen for most other people?
Basically, the major risk of using a black-box AI algorithm for diagnosis is that we don’t know when and how it is going to make undiscoverable but serious mistakes.
Barrier 2: Responsibility Ownership
In modern society, the medical industry is very mature and, in most developed countries, is regulated by numerous systematic laws and well-rounded policies. However, the application of AI might mess this up.
When a doctor makes a serious mistake because of significant negligence, liability law will very likely reduce the harm to the victims. But what if the mistake is made by an AI algorithm, which is just a piece of code? You might think the company that developed it has the responsibility to test it and make sure it doesn’t cause any ridiculous incidents. However, that might not be the case.
Not only humans can get sick; the AI model can get “sick”, too. As we know, it is not going to be 100% reliable in terms of consistency. Of course, medical professionals cannot be 100% reliable either, which is acceptable, but professionals are consistent: we can expect a novice to make mistakes and a professor to make fewer, and as the novice grows more experienced, they will make fewer mistakes too. That is not the case for AI. In fact, AI algorithms may produce more mistakes as time goes on, which is the so-called “drifting”.
Machine learning models are trained on a certain amount of labeled data. Mathematically, we say that the dataset used to train the model has a certain distribution, and the trained model can only be expected to work well if the new data it predicts on follows the same distribution. Specifically, any of the following changes in practice could cause serious failures of an existing machine learning model:
- The clinic/hospital changed its management system
- The machine learning model is utilized by a different hospital
- The machine learning model is utilized in a different area of the country or the world
- The disease the model is designed for has changed, for example because the causative virus has mutated
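This failure mode can be sketched in a few lines of plain Python with made-up numbers (a toy threshold “model” on a synthetic biomarker, not any real clinical system). The model is trained at one hospital, then evaluated on data from a hypothetical second hospital whose scanner reads consistently higher, shifting the input distribution:

```python
import random

random.seed(42)

def fit_threshold(healthy, sick):
    # Stand-in for training: split at the midpoint of the class means.
    return (sum(healthy) / len(healthy) + sum(sick) / len(sick)) / 2

def accuracy(threshold, healthy, sick):
    correct = sum(x < threshold for x in healthy)
    correct += sum(x >= threshold for x in sick)
    return correct / (len(healthy) + len(sick))

# Training hospital: healthy readings around 1.0, sick around 3.0.
healthy_train = [random.gauss(1.0, 0.5) for _ in range(1000)]
sick_train = [random.gauss(3.0, 0.5) for _ in range(1000)]
t = fit_threshold(healthy_train, sick_train)

# New patients from the SAME distribution: the model works well.
acc_in = accuracy(t,
                  [random.gauss(1.0, 0.5) for _ in range(1000)],
                  [random.gauss(3.0, 0.5) for _ in range(1000)])

# A different hospital whose scanner reads 1.5 units higher: the data
# has drifted, and accuracy collapses without a single line of code
# changing -- most healthy patients now fall above the old threshold.
acc_shift = accuracy(t,
                     [random.gauss(2.5, 0.5) for _ in range(1000)],
                     [random.gauss(4.5, 0.5) for _ in range(1000)])

print(acc_in, acc_shift)  # in-distribution high, shifted much lower
```

Nothing in the model “broke” in the software sense; the world it was trained on simply stopped matching the world it is deployed in, which is exactly why the misuse question below is so hard to settle.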
When any of these happen, should we blame the organization that developed the model, or the medical institute that “misused” it? It is very controversial and difficult to regulate.
In this article, I have shared some of my opinions on AI applications in the medical industry: why AI is so popular there, and why it has reached a bottleneck for social rather than purely technical reasons.
Of course, these barriers are not expected to be overcome in the short term. However, as someone who works in the data field, I am generally optimistic about AI/ML.
Some changes may well happen before the barriers are overcome:
- AI will not replace medical professionals, but medical professionals who are assisted by AI might replace those who aren’t.
- AI in the medical industry may develop faster in developing countries because it can reduce the cost of some medical services and improve efficiency.
- AI may change the relationship between patients and professionals. AI algorithms are likely to act in a “dispatching” role to improve the efficiency of the entire medical system.
Therefore, I would support utilizing AI in the medical industry to improve efficiency as a whole. That could be a better application than trying to use it to “replace” certain roles, which sounds fascinating but is not the right objective for the next decades.
References
- YITU Launches AI-Powered Intelligent Evaluation System of Chest CT for COVID-19
- La Trobe University models coronavirus infected lung in 3D
- Here’s why one tech investor thinks some doctors will be ‘obsolete’ in five years
- Why AI will not replace radiologists
- AI improves efficiency and accuracy of digital breast tomosynthesis
- How Can Doctors Be Sure A Self-Taught Computer Is Making The Right Diagnosis?