Medical ‘Live-Diagnostics’ made possible by Deep Learning

A February 2018 article published in Nature Biomedical Engineering, produced through a partnership between Google’s healthcare team and Stanford Medical Center, presents a proof of concept that may well revolutionize medical diagnostics.

Before diving into the paper, I recommend a brief read on Deep Learning, Machine Learning, Neural Networks, Artificial Intelligence, and so forth. I will leave those explanations to a real physicist or computer scientist. I am a physician (not a physicist), excited about the future of medicine being created by this accomplishment.

What is so interesting is that Deep Learning forces us to look at things from a new perspective in order to construct a framework for modeling. This construct and all of its learned associations are largely uninterpretable to a human being. As uncomfortable as that may make you, we are now able to produce predictions based on generalized data — in real time, without human interpretation.

This methodology is the exact opposite of how we normally do things in medicine. To check a patient’s blood pressure, we take physical measurements directly with a device, the sphygmomanometer, using an age-old methodology: correlating changes in the sound of blood flowing through the arteries with the cuff pressure at specific moments.

The power of Deep Learning to produce quantifiable predictions (e.g. blood pressure from a retinal image) rests on the interpretation of previous results (e.g. blood pressure from a sphygmomanometer). Obviously, this is something that will need to be accepted. It is anyone’s guess how long FDA approval could take: 5, 10, or 20 years down the line?

The paper represents where in the image the model is drawing its information from by overlaying a green heat map. This is a generalization, and in the end we can only ensure its accuracy by comparing it to the standards of conventional methods.
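As a rough illustration of how such an overlay could be produced (this is not the paper’s actual soft-attention code), here is a minimal pure-Python sketch that tints each pixel’s green channel in proportion to a saliency value. The image, saliency values, and blending rule are all invented for this example; a real pipeline would use numpy/OpenCV and saliency derived from the model itself:

```python
# Hypothetical sketch: overlay a green "attention" heat map on an RGB image.
# Image and saliency map are plain nested lists here for self-containment.

def overlay_green_heatmap(image, saliency, strength=0.6):
    """Blend saliency (values in [0, 1]) into the green channel of an RGB image."""
    out = []
    for img_row, sal_row in zip(image, saliency):
        out_row = []
        for (r, g, b), s in zip(img_row, sal_row):
            # Push the green channel toward 255 in proportion to the saliency.
            g_new = int(g + (255 - g) * s * strength)
            out_row.append((r, g_new, b))
        out.append(out_row)
    return out

# Tiny 2x2 example: the bottom-right pixel gets the strongest green tint.
image = [[(100, 100, 100), (100, 100, 100)],
         [(100, 100, 100), (100, 100, 100)]]
saliency = [[0.0, 0.2],
            [0.5, 1.0]]
tinted = overlay_green_heatmap(image, saliency)
```

The point of the overlay is exactly what the article says: it lets a human see, approximately, which regions the model relied on, even though the model itself remains uninterpretable.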

The Nature article refers to ‘cardiovascular risk factors’ (age, smoking history, blood pressure, etc.), but it does not mention the most widely used application of these in general practice: calculating the ASCVD Risk Score. Primary-care physicians and cardiologists use this score to determine treatment recommendations, such as starting medications like aspirin or statin drugs. These preventative treatments save lives by providing a standard of treatment to combat a high calculated risk of a serious heart attack within the next 5 to 10 years.
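For context, the ASCVD score comes from the Pooled Cohort Equations, which combine log-transformed risk factors into a weighted sum and convert that sum into a 10-year risk via a baseline survival term. The sketch below shows only the general shape of that calculation; every coefficient and constant in it is a placeholder I made up, not the published sex- and race-specific values, so it must not be used for any clinical purpose:

```python
import math

# General form of the Pooled Cohort Equations behind the ASCVD score.
# ALL numbers below are PLACEHOLDERS for illustration; the real coefficients
# are published in the 2013 ACC/AHA guideline and differ by sex and race.

PLACEHOLDER_BETAS = {          # hypothetical log-hazard coefficients
    "ln_age": 2.5,
    "ln_total_chol": 1.0,
    "ln_hdl": -0.9,
    "ln_sbp": 1.8,
    "smoker": 0.7,
    "diabetes": 0.6,
}
PLACEHOLDER_MEAN_SUM = 26.0           # hypothetical population-mean weighted sum
PLACEHOLDER_BASELINE_SURVIVAL = 0.95  # hypothetical 10-year baseline survival

def ten_year_ascvd_risk(age, total_chol, hdl, sbp, smoker, diabetes):
    """10-year risk = 1 - S0 ** exp(individual_sum - mean_sum)."""
    x = {
        "ln_age": math.log(age),
        "ln_total_chol": math.log(total_chol),
        "ln_hdl": math.log(hdl),
        "ln_sbp": math.log(sbp),
        "smoker": float(smoker),
        "diabetes": float(diabetes),
    }
    individual_sum = sum(PLACEHOLDER_BETAS[k] * v for k, v in x.items())
    return 1.0 - PLACEHOLDER_BASELINE_SURVIVAL ** math.exp(
        individual_sum - PLACEHOLDER_MEAN_SUM)

risk = ten_year_ascvd_risk(age=55, total_chol=213, hdl=50,
                           sbp=120, smoker=False, diabetes=False)
```

The structure is the interesting part: the score is itself a statistical model fit to population data, which matters for the argument two paragraphs down.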

One argument I can foresee concerns basing a cardiovascular risk index (human-produced) on predictions (AI-produced) of the very variables that construct it. I am curious how that plays out.

Structured for Deep Learning.

‘Teaching’ artificial intelligence to make a prediction is almost universal at this point — the same concept applies to how Facebook trains AI to predict which users are Russian trolls. It is built upon providing a ‘training set’, which includes the input data (retinal images in this case) along with the pre-determined output (diagnostic results in this case). The machine learning script is applied to this data set, and after an intensive computing process (beep bop boop) it builds a model that captures the relationship between the retinal images and the conventionally supplied diagnostic values (average glucose, smoking history, etc.). Accuracy is determined by multiple factors, including the size and quality of the data. The last step is validating the model’s accuracy by analyzing a separate data set of new retinal images, without revealing the results that were determined by conventional methods.
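The workflow above — train on labeled data, then validate on held-out data the model has never seen — can be sketched with a deliberately tiny stand-in: a one-variable least-squares fit playing the role of the deep network, and random scalar ‘images’ playing the role of retinal photographs. All names and data here are invented for illustration; the paper’s actual model is a deep convolutional network:

```python
import random

# Toy train/validate workflow. A single brightness feature stands in for a
# retinal image, and a noisy linear label stands in for a measured systolic
# blood pressure; both are invented for this sketch.

random.seed(0)

def make_example():
    feature = random.uniform(0.0, 1.0)                   # stand-in "image"
    label = 90.0 + 40.0 * feature + random.gauss(0, 2)   # stand-in measurement
    return feature, label

data = [make_example() for _ in range(200)]
train, validate = data[:150], data[150:]                 # held-out validation split

# "Training": fit y = a*x + b by ordinary least squares on the training set.
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# "Validation": measure error on data the model never saw, without letting it
# peek at the conventionally measured label.
mae = sum(abs((a * x + b) - y) for x, y in validate) / len(validate)
```

Even in this toy version, the crucial step is the same as in the paper: accuracy is only trusted once it holds on the separate validation set.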

Everything depends on the input data: how it is structured, and what relationships exist within it — relationships that can now be revealed.

A deep learning model that has been ‘trained’ and ‘validated’ for accuracy is now deployable to analyze new images. I will describe a commercially available product capable of doing so.

This is Amazon Web Services’ new hardware device, ‘DeepLens’ — a deep-learning-enabled camera priced around $250. Using the lens attached to a powerful on-board computer, DeepLens is capable of deploying ML models on new data and producing results in real time, something Amazon Web Services has referred to as ‘Live Analytics’. More about DeepLens can be found here, including sample projects that deploy pre-built models to perform tasks such as object detection and facial recognition.

Swap out your DeepLens lens for a retinal imaging lens, powered by an on-board computer capable of deploying the model in real time, and instantly predict the ASCVD (atherosclerotic cardiovascular disease) index for 5- and 10-year risk of serious heart attacks, along with treatment recommendations.
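The ‘Live Analytics’ loop this imagines — capture a frame, run the deployed model, emit a prediction — can be sketched with stand-in components. The camera, model, and field names below are hypothetical, not the DeepLens SDK; on a real device the capture and inference calls would come from the device’s own APIs:

```python
# Hypothetical sketch of a live-analytics loop: capture -> infer -> collect.

class StubCamera:
    """Pretends to capture frames; a real device would read from the lens."""
    def __init__(self, frames):
        self._frames = iter(frames)

    def capture(self):
        return next(self._frames, None)   # None signals end of stream

class StubModel:
    """Pretends to run inference; a real deployment would run the trained net."""
    def predict(self, frame):
        brightness = sum(frame) / len(frame)   # trivial stand-in "feature"
        return {"predicted_sbp": 90.0 + 40.0 * brightness}

def live_analytics(camera, model):
    """Run the model on every frame the camera produces, in order."""
    results = []
    while (frame := camera.capture()) is not None:
        results.append(model.predict(frame))
    return results

readings = live_analytics(StubCamera([[0.2, 0.4], [0.8, 0.6]]), StubModel())
```

The shape of the loop is the whole idea: once a validated model is on the device, each new image becomes a diagnostic reading with no human interpretation step in between.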

The logistical implication is an accurate and reliable basis for diagnosis, so that treatment can move forward swiftly. My human prediction is that this will be utilized as ‘Live Diagnostics’ — ultimately allowing human physicians more time for humanistic care.

Source: Deep Learning on Medium