A Brief History of Deep Learning | A High Schooler’s Guide To Deep Learning and AI

Original article was published by Ada Tur on Artificial Intelligence on Medium


Welcome back! I’m glad to see that you’re interested in continuing on this journey with me! Before we get into all of the fun stuff, though, I thought it would be useful for you to understand where everything started. AI is a deeply collaborative field, so it helps to know its key figures and the advancements they made. In this article, I will go over a timeline of deep learning, the AI winters, and the gurus of AI.

The Roots Of AI

As most of you may know, during the Second World War, Alan Turing, a British mathematician and computer scientist, and his team created a machine that cracked ‘Enigma’, one of the most complex ciphers used by Germany. The machine, called the Bombe, led Turing to believe that machines could become intelligent and process information similarly to human beings. Soon after, in 1950, Turing proposed what is now known as the ‘Turing Test’: an evaluator speaks with both an AI and a human being and asks questions that allow them to decide which participant is the human. In Turing’s framing, the machine might even purposefully give incorrect answers to certain tasks, like math problems, in order to make it more difficult to distinguish the human from the AI. Turing developed the test to describe a circumstance in which an AI mimics human activity at an incredibly high level, and it became one of the first ideas for how an AI should be able to act in order to be a ‘good’ AI.

Source: GeeksForGeeks

The First Developments In AI

About six years later, in 1956, artificial intelligence began to gain traction. John McCarthy, typically credited as “one of the founders of the discipline of AI”, coined the term “artificial intelligence”. He was among the first scientists to explore and develop artificially intelligent systems, and he later designed the Lisp programming language, which became one of the most popular AI languages of its time. Around the same period, Allen Newell, J. C. Shaw, and Herbert Simon developed the Logic Theorist, a program that could prove mathematical theorems and is often considered the very first truly artificially intelligent program. These scientists, including McCarthy, presented their ideas and creations at the 1956 Dartmouth conference, now recognized as the birth of AI.

Source: Dartmouth Conference

The Summers And Winters Of AI

The first AI winter lasted from the mid-1970s to around 1980, following a series of early developments in AI. While the period directly before it is considered the first summer of AI because of its many advancements, the field failed to keep people’s hopes for success alive. As a result, major companies that had begun investing in artificially intelligent technology pulled out and focused on other solutions. This drought lasted until the second summer of AI, which ran through the 1980s. During this period, many advancements were made, mainly in speech processing. However, because of processing, memory, and data limitations, AI under-delivered and dashed people’s hopes once again, thus beginning the second and longest AI winter. One of the biggest things to happen during this winter was the creation of LSTMs, or long short-term memory models, by Hochreiter and Schmidhuber in 1997. Their paper on LSTMs was cited a total of 13 times in the following 3.5 years; nowadays, it is cited 13 times every 3.5 days, making it the most cited deep learning paper of the 20th century.

Source: Myself [the dates above are approximate and open to interpretation]

The Latest Advancements In AI

After the 2000s, with advancements in memory, GPUs, and disk space, a few groups of researchers were eager to push the boundaries of what AI could do. One of these teams, led by Yann LeCun, showed the world that deep learning could in fact work more effectively than other models, demonstrating this with the success of their convolutional neural network. Together with Yoshua Bengio and Geoffrey Hinton, LeCun started the new era of deep learning, and the three received the 2018 Turing Award for their work. Nowadays, CNNs have become an established approach for image processing, and deep learning advances have accelerated research in other fields, such as speech recognition and natural language processing, amongst countless others. One big advantage of deep learning is that scientists do not need to hand-engineer feature extraction for their models; instead, they can focus on constructing a strong architecture, and the network learns useful features from the data on its own.
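To make the feature-extraction point concrete, here is a tiny sketch (my own illustration, not from the original article) of the kind of hand-engineered feature researchers once designed manually: a vertical-edge filter applied by 2D convolution, the core operation of a CNN layer. The difference in deep learning is that a CNN learns filters like this automatically from data instead of having them designed by hand. The function and variable names here are my own.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a conv layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Slide the filter over the image and take a weighted sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic hand-engineered feature: a Sobel-style vertical-edge detector.
# A trained CNN typically discovers similar filters in its first layer.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

# Toy image: dark left half, bright right half, so there is one vertical edge
image = np.zeros((5, 5))
image[:, 3:] = 1.0

response = convolve2d(image, kernel)
print(response)  # strong responses where the vertical edge sits
```

In a CNN, the numbers inside `kernel` are not fixed by a human; they start random and are adjusted by gradient descent, which is exactly why practitioners can skip manual feature engineering and focus on the architecture.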

If you would like to view their 2018 Turing Award talk, I link it below:

This is my interpretation of the history of AI and deep learning, summarized briefly. It is important to note that many resources online can be inaccurate, that different perspectives exist on the advancements and contributors, and that it is very difficult to know a truly concrete history. Nonetheless, I hope this timeline proves helpful in giving you some context for the lessons to come. In the next article, I will begin showing you simple algorithms that serve as a gateway into deep learning. I hope to see you there!