Source: Deep Learning on Medium
Bactoneurons: Artificial Neural Networks made from engineered bacteria
The field of Artificial Intelligence (AI) saw some huge advances in 2019, such as Recurrent Neural Networks identifying deepfake videos and predicting the smell of molecules. The new decade promises no less! In early January 2020, a paper was published on the implementation of… a single-layer artificial neural network! A bit anti-climactic after the far more complicated neural nets you saw last year? Wait, there’s more: unlike its peers, this particular neural net uses living engineered bacteria as its neurons! Let us dive into this straight-out-of-the-future research before taking a peek at what it could mean for AI as a whole.
Creating living AI
Our understanding of the natural neural networks in our brains is far from complete. If anything, the more we learn about them, the more we realise how much we don’t know! But one thing we do understand from first principles is how Artificial Neural Networks (ANNs) work. It was only a matter of time before someone tried the obvious idea: why not implement ANNs in a way closer to real neural networks, using living cells? And this is exactly what scientists at the Saha Institute of Nuclear Physics have done! For those interested in taking a look at the paper, here it is.
As with all new concepts, we start small. In this case, single-layer architectures were created for simple purposes such as a 2-to-4 decoder and a 2-to-1 multiplexer. Taking the decoder example, for those unfamiliar with the term, an N-to-2^N decoder is an electronic component that maps each of the 2^N possible N-bit inputs to exactly one of 2^N output lines: precisely one output is set to 1 and the rest stay at 0.
A simplified description of the 2-to-4 decoder is shown in the figure above, and its behaviour is captured in the truth table below. A truth table simply lists, in each row, the inputs to the decoder and the output one should expect for them. For example, in the first row of the table, both inputs are 0, so D0 is 1 and all other outputs are 0.

A  B  |  D0  D1  D2  D3
0  0  |   1   0   0   0
0  1  |   0   1   0   0
1  0  |   0   0   1   0
1  1  |   0   0   0   1
To show the simplicity of the task, here is the code simulating a 2-to-4 decoder implemented in Python:
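(The original listing is not reproduced in this copy of the article; here is a minimal equivalent sketch, with the function name being my own choice.)

```python
def decode_2_to_4(a: int, b: int) -> list[int]:
    """Map two input bits (A, B) to a one-hot 4-bit output (D0..D3)."""
    index = (a << 1) | b  # treat A as the high bit, B as the low bit
    return [1 if i == index else 0 for i in range(4)]

# Each input pair activates exactly one output line.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, decode_2_to_4(a, b))
```

Running the loop prints the four rows of the truth table, one active output per input pair.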
We now know what was done: a simple neural network made from bacterial cells, trained to learn very simple functionality. But how was this done? Brace yourselves, ladies and gentlemen, we are headed for a crossover between biochemistry and deep learning!
Bactoneurons are the building blocks of ANNs made from living bacterial cells.
First of all, suitable chemical alternatives had to be found for the simple ‘1’ and ‘0’ of electronics because, obviously, living cells detect changes in their environment through variations in the concentration of surrounding substances, not through ones and zeroes. Neurons as we know them in ANNs were replaced by engineered bacteria named bactoneurons. The inputs, which in the decoder figure are A and B, were two distinct chemicals (if scientific names don’t scare you, they are anhydrotetracycline (AT) and isopropyl β-D-1-thiogalactopyranoside (IP)). The outputs, which in the same figure were D0, D1, D2 and D3, were now four fluorescent proteins (mKO2, E2-Crimson, mTFP1 and mVenus). The activation functions, which in a software ANN might be the Rectified Linear Unit (ReLU) or the Hyperbolic Tangent (tanh), were implemented by engineering the genes of the bactoneurons into “gene circuits” that behave similarly to their software counterparts. A gene circuit, or more formally a synthetic genetic circuit, is a human-designed molecular genetic arrangement which follows engineering design principles and works inside living cells. The weights and biases that the neural network needed were adjusted by creating new promoters.
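To get a feel for how a chemical dose-response can stand in for a software activation function, here is an illustrative sketch. Gene expression driven by an inducible promoter typically rises sigmoidally with inducer concentration, which is commonly modelled with a Hill function; the function and parameter values below are my own illustration, not taken from the paper.

```python
def hill_activation(x: float, K: float = 1.0, n: float = 2.0) -> float:
    """Fraction of maximal gene expression at inducer concentration x.

    A Hill function: sigmoid-shaped in x, half-maximal at x = K,
    with steepness set by n (illustrative parameter values).
    """
    return x**n / (K**n + x**n)

# Low inducer concentration -> output near 0; high concentration -> near 1,
# qualitatively like a sigmoid activation in a software ANN.
print(hill_activation(0.1), hill_activation(1.0), hill_activation(10.0))
```

Tuning K and n reshapes the response curve, which is the intuition behind adjusting weights and biases by engineering promoters.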
Put (very, VERY) simply, as explained here, a promoter is a region of bacterial DNA that controls whether, and how strongly, a nearby gene is expressed. Now you’ve seen all the components needed to build the neural net. As I am sure many skipped the long names, here is a table summarising how the bacterial neural net was built:

ANN component        |  Bacterial counterpart
Neuron               |  Engineered bacterium (bactoneuron)
Inputs (A, B)        |  Chemicals AT and IP
Outputs (D0–D3)      |  Fluorescent proteins mKO2, E2-Crimson, mTFP1, mVenus
Activation function  |  Synthetic gene circuit
Weights and biases   |  Engineered promoters
What this means for the field of AI
While this technology is still in its infancy, the fact that AI can be implemented with living cells opens up a whole new direction for AI-based hardware. While I hope that in the future, when I say “Alexa, play my most recent Spotify playlist”, it’s not a gooey AI-enabled organ that plays it, I do believe certain applications of this technology have potential, although claiming that all AI will one day be made from living cells would be a stretch. If this is indeed the direction we are headed in, there is plenty to do, and the progress will be fascinating to follow. For a start, more complex neural nets, along with their training and testing algorithms, will need to be implemented with bacterial cells.
On the other hand, the more progress there is in general AI, the higher the chance of it backfiring. I am not saying we will end up in a Terminator scenario; anyone savvy in Machine Learning knows that the strength of AI lies not in doing everything better than us, but in being much better than us at one specific task. Every advance in our AI algorithms, however, brings us a little closer to Artificial General Intelligence (AI that can learn any intellectual task a human being can). To illustrate where this could lead us, I shall end this article with a quote from Eliezer Yudkowsky’s Artificial Intelligence as a Positive and Negative Factor in Global Risk, as food for thought:
AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.