Paving the way for personalized medicine — Deep Learning and Pharmacogenomics


What’s currently being done:

Patient stratification:

The idea of patient stratification is to be able to cluster subgroups within a larger patient dataset, in order to determine which patient data to use. The goal is to help reduce the time spent on choosing the correct data.

[Figure: diagram of clustering patient data into subgroups]
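To make the clustering step concrete, here is a minimal sketch using scikit-learn. The patient feature matrix is synthetic stand-in data, and the number of subgroups is an arbitrary assumption, not a value from any study.

```python
# A minimal sketch of patient stratification via clustering.
# In practice each row would fuse biomedical, demographic, and
# sociometric variables for one patient; here the data is random.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                # 500 patients, 20 fused features

X_scaled = StandardScaler().fit_transform(X)  # put all variables on one scale
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)

# Each patient is assigned to one of four hypothetical subgroups.
print(np.bincount(kmeans.labels_))
```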

This process is complicated because it involves fusing biomedical, demographic, and sociometric data to categorize the patients. The problem is the sheer number of variables we have to deal with, which requires extensive feature analysis and extraction to make sure we select only the data we actually need.
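One common extraction step is to project the fused, high-dimensional patient data onto a few principal components before clustering. The sketch below again assumes synthetic stand-in data and an arbitrary component count.

```python
# A sketch of feature extraction by dimensionality reduction:
# project 200 raw patient variables onto 10 principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 200))   # 500 patients, 200 raw variables

pca = PCA(n_components=10)        # keep the 10 strongest directions
X_reduced = pca.fit_transform(X)

# How much of the original variance the 10 components retain.
print(pca.explained_variance_ratio_.sum())
```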

Deep learning has the potential to learn useful data representations that can help with treatments or predictions. We want to design models capable of finding patterns that are sparse and complex.

A solution currently in use is Deep Patient, a semi-supervised approach built on stacked autoencoders (SAEs) that can predict “final diagnosis, patient risk level, and outcome (e.g. mortality, re-admission)”.
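As a hedged sketch of the building block such models stack, here is a single denoising autoencoder layer in PyTorch. The layer sizes, corruption rate, and reconstruction loss are illustrative assumptions, not Deep Patient’s published hyperparameters.

```python
# One denoising autoencoder layer: corrupt the input, then learn to
# reconstruct the clean version. Stacking these layers (feeding each
# layer's hidden output into the next) yields a deep representation.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_features=500, n_hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_features), nn.Sigmoid())

    def forward(self, x, corruption=0.2):
        # Randomly zero out a fraction of inputs before encoding.
        mask = (torch.rand_like(x) > corruption).float()
        hidden = self.encoder(x * mask)
        return self.decoder(hidden), hidden

model = DenoisingAutoencoder()
x = torch.rand(32, 500)                     # a batch of 32 patient vectors
reconstruction, representation = model(x)
loss = nn.functional.mse_loss(reconstruction, x)
loss.backward()                             # one unsupervised training step
```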

(Researchers are also looking into other semi-supervised techniques like Generative Adversarial Networks (GANs), which could possibly help with understanding this complex data.)

Drug discovery and development:

The approach that has been used for a while focuses on the process after proteins are synthesized by ribosomes, known as post-translational modification. Hundreds of proteins modified this way have been identified; they go on to become larger, more complex proteins, some of which have demonstrated potential as druggable targets.

However, deep learning approaches can help speed up the search for the right protein candidate. The eventual goal would be to test a compound/drug by simulating it in a virtual human system.
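The article names no specific method here, but as a minimal illustration of scoring candidate compounds with a learned model, the sketch below trains a small neural network on hypothetical molecular fingerprints. The fingerprints, labels, and layer sizes are all stand-in assumptions; a real pipeline would compute fingerprints from molecular structures (e.g. with RDKit).

```python
# A hedged sketch of virtual screening: fit a model on compounds with
# known activity against a target, then score unscreened candidates.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
fingerprints = rng.integers(0, 2, size=(1000, 1024))  # 1024-bit fingerprints
active = rng.integers(0, 2, size=1000)                # known activity labels

model = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=50, random_state=0)
model.fit(fingerprints, active)

candidates = rng.integers(0, 2, size=(5, 1024))       # unscreened compounds
print(model.predict_proba(candidates)[:, 1])          # predicted activity
```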

The biggest challenge holding this field back is underfunding in bioinformatics.

Toxicity prediction of certain chemicals/drugs:

The push for new toxicity-testing methods led to the Tox21 Data Challenge: given roughly 12,000 different chemicals and drugs as input, a deep learning model has to predict 12 different toxic effects for each compound.

The deep learning model DeepTox achieved the highest performance in this challenge and demonstrated the benefits of using a multi-task network over a single-task one.

Models like DeepTox use an architecture similar to the DeepAOT family, which comprises regression, multi-classification, and multi-task networks.
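As a minimal sketch of the multi-task idea, assuming illustrative layer sizes rather than the published DeepTox architecture: one shared trunk learns chemical features, and 12 small heads each predict one toxic effect.

```python
# A multi-task network: a shared trunk plus one head per toxic effect.
import torch
import torch.nn as nn

class MultiTaskToxNet(nn.Module):
    def __init__(self, n_features=2048, n_tasks=12):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_features, 1024), nn.ReLU(),
            nn.Linear(1024, 256), nn.ReLU(),
        )
        # One binary-output head per toxic effect.
        self.heads = nn.ModuleList(nn.Linear(256, 1) for _ in range(n_tasks))

    def forward(self, x):
        shared = self.trunk(x)
        return torch.cat([head(shared) for head in self.heads], dim=1)

model = MultiTaskToxNet()
x = torch.rand(8, 2048)            # a batch of 8 chemical descriptor vectors
logits = model(x)                  # shape (8, 12): one logit per effect
print(logits.shape)
```

Because the trunk is shared across all 12 tasks, features learned for one toxic effect can improve predictions for the others, which is the advantage multi-task training showed over single-task models.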

What’s really interesting about DeepAOT and DeepTox is that they are not limited to detecting oral toxicity; they can also detect toxicity induced in more complex systems.