Deep Learning and Solving Aging

Photo by Rod Long on Unsplash

By sheer serendipity, I stumbled upon David Sinclair’s (@davidasinclair) information theory of aging. Sinclair has spent his career in pursuit of human longevity (i.e. seeking to cure the disease known as aging). In his recent book, “Lifespan”, he writes about an intriguing conceptual framework for why all biological systems tend toward aging.

I am going to discuss how our new discoveries about the nature of deep learning can provide a better understanding of Sinclair’s model. What I mean is that deep learning has given us new insights (of a computational nature) into how complex learning systems work. Can we take these discoveries and apply them to this new theory of aging to gain new insights? To be clear, I’m not applying deep learning to solve aging, although that is indeed a promising endeavor. I am taking a model of how deep learning works and applying the same model to aging.

So let us begin. Sinclair’s model of aging is as follows. A stem cell differentiates into a specialized cell in the body. This cell is regulated by the epigenomic information that wraps the DNA. In effect, only a limited amount of DNA information is expressed in the specialized cell. Sinclair conjectures that aging is a result of the deterioration of the epigenomic information over time.

The specialized cell is able to continually repair itself, but repair cycles inevitably accumulate errors. These errors show up as the specialized cell becoming unable to perform its original function. It is as if the cell has forgotten how to function correctly.

David Sinclair proposes that to slow aging, one has to exercise these repair mechanisms. He notes that bacteria (and possibly other kinds of cells) operate in at least two modes. In one mode, in an environment of rich resources, a cell tends to favor growing itself. In an alternative context of scarce resources, a cell hunkers down and activates processes that preserve and repair itself. Sinclair argues that it is important to activate this preservation and repair machinery. This is known as hormesis: the biological phenomenon in which low-dose exposure to toxins can have beneficial effects. The ability of cells to function in these two modes is likely a very ancient mechanism and is present in all biological cells.

When we consider the body as a learning system, negative perturbations force the body’s cells to learn how to compensate for the perturbation. The cells that get slightly damaged in the process learn how to repair themselves. In the absence of these ‘teachable moments’, a cell eventually forgets its skillset and becomes less capable of making effective repairs. It’s like a kind of memory that requires reinforcement at spaced intervals to keep from being forgotten.

Sinclair recommends exercise, caloric restriction, and exposure to extreme cold and heat. The conjecture is that these stresses exercise the molecular repair machinery and thereby keep it efficient and robust. It’s the same reasoning behind holding fire drills in buildings with many occupants. Sinclair himself also takes certain molecules to enhance the activation of the ‘longevity pathways’ that trigger this cellular behavior: daily NMN and resveratrol in the morning, and metformin at night.

There are molecules produced by plants and fungi that have very ancient effects on our functioning. Psychedelic drugs, for example, trigger a molecular change in the workings of our neurons to produce hallucinations. In a similar way, chemicals that come from plants in harsh environments (e.g. resveratrol and the pigments that give plants their color) perhaps trigger sympathetic cellular behavior, as if the body itself were in a harsh environment. It is ironic that to live longer in this modern world, we have to make our bodies believe that we are living in a harsh environment. With the increase in convenience, and thus the loss of hardship, we now have to know how to conveniently simulate hardship!

Sinclair likens the aged epigenome to the scratched surface of a DVD. Parts of the disk become unreadable, and so its interpretation becomes flawed. As the epigenome repairs itself, it inadvertently makes errors, and these errors are reflected in the expression (or lack of expression) of genes.

In an ambitious experiment to test the hypothesis that epigenomic information deteriorates, Sinclair applied three of the four Yamanaka factors to slightly perturb the epigenome. The four Yamanaka factors (O, S, K, M) turn a differentiated cell back into a pluripotent stem cell. Sinclair applied just O, S, and K to perturb the epigenomic state of a cell. To his surprise, this perturbation moved the cells back to a younger version of themselves.

How do you know that a cell is a younger version of its current state? The methylation levels of DNA can be used as a measure of the age of a cell; this is known as the Horvath epigenetic clock. Sinclair’s experiment was applied to a damaged mouse eye, and the damage was repaired to the state of a younger eye.
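
To make the idea of an epigenetic clock concrete, here is a minimal Python sketch of how a Horvath-style prediction is structured: a weighted sum of methylation levels at selected CpG sites, passed through a calibration transform. The weights, methylation profiles, and number of sites below are made up for illustration; the actual clock was fit with elastic-net regression over roughly 353 CpG sites.

```python
import numpy as np

ADULT_AGE = 20  # constant used in Horvath-style age calibration

def inverse_age_transform(m):
    """Map the linear model output back to years (Horvath-style calibration)."""
    if m <= 0:
        return (ADULT_AGE + 1) * np.exp(m) - 1
    return (ADULT_AGE + 1) * m + ADULT_AGE

def predict_epigenetic_age(betas, weights, intercept):
    """betas: methylation fractions (0..1) at the clock's CpG sites."""
    m = intercept + np.dot(weights, betas)
    return inverse_age_transform(m)

# Toy example with 5 hypothetical CpG sites (the real clock uses ~353).
rng = np.random.default_rng(0)
weights = rng.uniform(0.5, 1.5, size=5)    # hypothetical positive effect sizes
betas_young = rng.uniform(0.1, 0.4, 5)     # hypothetical methylation profile
betas_old = betas_young + 0.3              # drifted (hyper-methylated) profile

print(predict_epigenetic_age(betas_young, weights, intercept=-1.0))
print(predict_epigenetic_age(betas_old, weights, intercept=-1.0))
```

With these made-up numbers, the drifted methylation profile comes out decades “older” than the original, which is all the sketch is meant to show: age read off as a statistical summary of methylation state.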

It’s important to note that the application of the Yamanaka factors perturbs the cell in the direction of a pluripotent stem cell. This perturbation, however, may be gradual enough that the original developmental instructions (in the presence of certain molecules) can transform the cell back into the original differentiated state of a young cell. Other regulatory molecules must be present for this regeneration to happen. The discovery, then, is that there are regulatory molecules that can move the state of the epigenome in either direction along the axis of aging. David Sinclair’s “information theory of aging” is a convincing theory of why we age: we age because our cells forget the original intentionality that is encoded in the epigenome.

It’s intuitive why an organism eventually stops growing: uncontrolled growth can be detrimental to the sustainability of an individual. Therefore, biological systems eventually shut off the mechanism of the fountain of youth. I don’t know enough about axolotls, which stay forever young, to say how they are able to maintain this state.

Geoffrey West’s book “Scale” ponders the question of why we age. He doesn’t quite arrive at a model, but he discovers another universality concerning a mammal’s average lifespan: measured in heartbeats, lifespan is roughly constant across mammals. Mice, with their higher heart rates, live much shorter lives than humans; larger animals like elephants and whales have correspondingly slower heart rates. The universality (a power law) traces back to physical scaling constraints. Biological systems scale in size in a manner reminiscent of branching trees, and this accounts for the universality.

Source: http://tuvalu.santafe.edu/files/Bus_Complexity_Conf_2011.pdf
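
As a rough back-of-the-envelope sketch of this idea, the snippet below multiplies approximate resting heart rates by approximate lifespans for a few mammals. The figures are order-of-magnitude illustrations, not data taken from West’s book.

```python
# Resting heart rate falls roughly as body mass^(-1/4) while lifespan grows
# roughly as mass^(+1/4), so total lifetime heartbeats come out roughly
# constant across mammals. Values below are approximations for illustration.

def lifetime_heartbeats(heart_rate_bpm, lifespan_years):
    minutes_per_year = 60 * 24 * 365
    return heart_rate_bpm * lifespan_years * minutes_per_year

animals = {
    # name: (approximate resting heart rate in bpm, approximate lifespan in years)
    "mouse":    (600, 2.5),
    "human":    (60, 75),      # humans are an outlier -- we live unusually long
    "elephant": (30, 65),
}

for name, (bpm, years) in animals.items():
    print(f"{name:>8}: ~{lifetime_heartbeats(bpm, years):.1e} lifetime heartbeats")
```

The mouse and the elephant land within a factor of two of each other (around a billion beats), which is the “fixed supply of heartbeats” intuition discussed below.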

The larger the animal, the slower its heart beats; it takes longer for blood to flow through a larger body. The slower an animal’s heartbeat, the longer it can live. Put another way, there is a fixed budget of heartbeats available to keep an animal alive. It reminds me of the number of charge cycles a rechargeable battery has before it can no longer hold a charge well. What makes batteries degrade over time? Well, I thought we knew why, but this article (Sept 2018) says:

“Despite batteries being so ubiquitous, people don’t know how they work,” Chueh says. “It’s kind of a miracle batteries work at all.”

We don’t even know why batteries age!

But let’s get back to this abstract notion of forgetting and its relationship to complex learning systems and how information is encoded.

How DNA encodes information is still a very complex problem. We had originally imagined that we could find sections of DNA that uncontroversially identify instruction sequences and activate the generation of traits. This conjecture is now in question under what is known as the ‘omnigenic’ model.

Complex learning systems have a kind of ‘uncertainty principle’ in which the encoding incorporates both robust and seemingly random features. The oddity is that one cannot remove the random features without a detrimental effect. We find this same principle in artificial deep learning systems; it is why disentanglement (i.e. training toward purely robust features) seems so hopelessly difficult to achieve. Dispersing information across all the neurons leads to a more robust information system. This phenomenon may be related to how DNA encodes traits: the omnigenic model proposes that trait information is likewise dispersed across the entire genome, with certain traits encoded over thousands of sites. DNA may thus exhibit a kind of holographic encoding of the sort we also find in entangled deep learning models.
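
A minimal sketch of that intuition, assuming nothing more than simulated genotypes and effect sizes: a trait score is the sum of thousands of tiny contributions, and no single site dominates. This is not a real polygenic score, just an illustration of the “many small effects” structure the omnigenic model describes.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sites = 10_000

effect_sizes = rng.normal(0, 0.01, n_sites)   # each site matters very little
genotype = rng.integers(0, 3, n_sites)        # 0, 1, or 2 copies of a variant

trait_score = float(effect_sizes @ genotype)

# No single site dominates: removing the single largest contributor
# barely changes the aggregate score.
largest = np.argmax(np.abs(effect_sizes * genotype))
score_without_largest = trait_score - effect_sizes[largest] * genotype[largest]

print(f"trait score:              {trait_score:.3f}")
print(f"without the largest site: {score_without_largest:.3f}")
```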

So what new insights can we find if we take this omnigenic model of DNA information encoding and apply it to Sinclair’s model of epigenomic information? Sinclair’s interpretation is that the aged epigenome is like a scratched DVD. If you’ve ever watched a scratched DVD, you know the scratches have a catastrophic effect on sections of the original video. Holographic encoding is more robust to errors: when one cuts a hologram in half, one can still retrieve the original information, albeit at lower resolution. The errors are not so catastrophic as to make the information unusable. The point here is that the dispersal of information across space leads to robust information retrieval.

The benefit of digitization (found in both DNA and DVDs) is that it permits accurate duplication of information. This is orthogonal to the value of information dispersal: dispersal leads to a robust interpretation of the information even when errors are introduced into the original encoding.
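
Here is a toy numerical sketch of that contrast (purely illustrative, not a biological model): the same signal is stored once in a “localized” form, where each coefficient holds one entry, and once in a dispersed, hologram-like form via a random overcomplete projection. Half of each encoding is then erased.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=100)

# Dispersed encoding: an overcomplete random projection; every coefficient
# mixes information from every input dimension.
A = rng.normal(size=(400, 100)) / np.sqrt(100)
dispersed = A @ signal

# Erase roughly half of the dispersed encoding.
keep = rng.random(400) > 0.5
dispersed_damaged = dispersed[keep]
A_damaged = A[keep]

# Reconstruct from the surviving mixed coefficients via least squares.
recovered, *_ = np.linalg.lstsq(A_damaged, dispersed_damaged, rcond=None)
dispersed_error = np.linalg.norm(recovered - signal) / np.linalg.norm(signal)

# Localized encoding: each coefficient stores one signal entry; erased
# entries are simply gone and must be filled with zeros.
localized_damaged = signal.copy()
localized_damaged[rng.random(100) > 0.5] = 0.0
localized_error = np.linalg.norm(localized_damaged - signal) / np.linalg.norm(signal)

print(f"relative error, dispersed encoding after ~50% erasure: {dispersed_error:.3f}")
print(f"relative error, localized encoding after ~50% erasure: {localized_error:.3f}")
```

The dispersed encoding recovers the signal almost exactly from its surviving half, while the localized encoding loses roughly half the signal outright, which is the scratched-DVD-versus-hologram distinction in miniature.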

To illustrate this better, let me employ a generative model, specifically a style-based GAN (StyleGAN), to illustrate the mechanics of the epigenome. StyleGAN has an intriguing structure in that it learns a style encoding (the blocks labeled ‘A’ in the architecture diagram) that it employs to constrain the generation of images.

A Style-Based Generator Architecture for Generative Adversarial Networks

One can think of the epigenome in this manner. Evolution learns a set of gene-expression controls that are encoded in the epigenome, and this epigenome subsequently regulates the generation of proteins in a manner analogous to StyleGAN’s style encoding. The effect of a perturbed epigenome is that the styles of the generative model diverge from the original.
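
To make the analogy concrete, here is a schematic sketch (not the actual StyleGAN code) of style modulation in the spirit of adaptive instance normalization: a style vector is mapped through a learned affine layer (the ‘A’ blocks) into per-channel scales and shifts that modulate feature maps, and a small perturbation of the style vector shifts the generated output. All the weights and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def affine_A(style, n_channels, W, b):
    """Map a style vector to per-channel (scale, shift) -- the 'A' block."""
    params = W @ style + b
    return params[:n_channels], params[n_channels:]

def adain(features, scale, shift):
    """Normalize each channel, then apply the style-driven scale and shift."""
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True) + 1e-8
    normalized = (features - mean) / std
    return scale[:, None, None] * normalized + shift[:, None, None]

n_channels, style_dim = 8, 16
features = rng.normal(size=(n_channels, 32, 32))        # one layer's feature maps
W = rng.normal(size=(2 * n_channels, style_dim)) * 0.1  # hypothetical learned affine
b = rng.normal(size=2 * n_channels) * 0.1

style = rng.normal(size=style_dim)
scale, shift = affine_A(style, n_channels, W, b)
output = adain(features, scale, shift)

# Slightly perturb the style vector ("damage the epigenome") and regenerate.
perturbed_style = style + rng.normal(scale=0.1, size=style_dim)
p_scale, p_shift = affine_A(perturbed_style, n_channels, W, b)
perturbed_output = adain(features, p_scale, p_shift)

drift = np.linalg.norm(perturbed_output - output) / np.linalg.norm(output)
print(f"relative drift of the generated features: {drift:.3f}")
```

The point of the sketch is the control flow, not the numbers: the same underlying generator produces a drifted output when its style code (the analogue of the epigenome) is perturbed.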

An interesting property of deep learning generative models like GANs is that they are adept at filling in the blanks. Here’s one example of photo reconstruction:

https://news.developer.nvidia.com/new-ai-imaging-technique-reconstructs-photos-with-realistic-results/

I suspect that the generative machinery that reads the epigenome is similarly adept at reconstructing its output despite errors in the original encoding.

What is compelling about these recent discoveries is that we are developing a new theory of information emergence that might explain not only intelligence but also the nature of aging.

Further Reading

https://www.amazon.com/Lifespan-Why-Age_and-Dont-Have/dp/1501191977