The Origin of AI

Originally published by Ryngo Brody in Artificial Intelligence on Medium


The concept of Artificial Intelligence, a man-made machine that can not only replicate Man but even surpass him, has become one of the most cherished archetypes in science fiction. From AM in I Have No Mouth and I Must Scream, to Lieutenant Commander Data from Star Trek, characters based on AI have both terrified and fascinated us for decades. This article is an attempt to understand the beginning of artificial intelligence as an abstract concept, its evolution into more practical terms, and the start of real-world applications of these concepts.

Precursors to AI

Many parallels to the idea of a man-made intelligence, or an entirely artificial being that can think, can be found in mythologies throughout the world. One of the earliest examples is Talos, a giant made of bronze, mentioned by the poet Hesiod around 700 BCE. Built by Hephaestus, the Greek god of invention and blacksmithing, Talos was made to protect Crete from invaders. He was described as having a tube running from his head to his feet, carrying a life-giving fluid named ichor. In another story, the sorceress Medea managed to defeat Talos by removing a bolt at his ankle and releasing the ichor. Interestingly, several Greek myths heavily implied that such beings may only cause misery and destruction, the most infamous example being Pandora, who was described as an artificial, evil woman sent to earth by Zeus to release a box of miseries.

In Jewish legend, Rabbi Judah Loew ben Bezalel was said to have built Golems, silent automatons shaped from clay that would be given an objective and set free to accomplish it. Alchemists and writers such as Paracelsus, Jābir ibn Hayyān, and Johann Wolfgang von Goethe described the creation of homunculi, artificial human beings created through alchemy. Another famous legend is that of the Brazen Head, a device claimed to have the ability to answer any question it was asked.

In the late 18th century and the 19th century, ideas about artificial intelligence and sentient machines evolved further, the most notable example being Mary Shelley’s Frankenstein. There were also several strides made in scientific and philosophical thinking about such machines, one of the earliest examples being Samuel Butler’s “Darwin among the Machines”, in which Butler theorizes that:

“Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.”

Butler then goes on to proclaim:

“War to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the race.”

As is evident so far, people across cultures have treated the concept of artificial intelligence with equal measures of scrutiny and wonderment. Every declaration of its superhuman abilities and intellect tends to be followed by a prediction that it will cause death and destruction, or even the eradication of the human race.

The mechanization of human thought and reasoning

The cornerstone of Artificial Intelligence is the ability to reduce logic and reasoning to steps that are objective, repeatable, and algorithmic in nature, such that a machine can be constructed to carry out those steps and obtain the required results. While this is a rather reductive description, it summarizes the basic requirements for starting to work on AI systems.

The study of formal reasoning and the development of structures for mechanical reasoning are among the oldest fields of research in existence, with several structured methods created in the first millennium BCE. While Aristotle is considered the father of deductive reasoning, several other notable thinkers contributed extensively to this field, such as Euclid, al-Khwārizmī, William of Ockham, and Duns Scotus.

In the 14th century, Ramon Llull developed what are known as logical machines, devices that could combine basic truths and connections to produce facts. He believed that this would result in all possible knowledge being generated and recorded. These ideas were taken further by Gottfried Leibniz, René Descartes, and Thomas Hobbes, who attempted to develop a system of calculated reasoning using symbols, such that all human thought could be replicated through calculation. This line of thinking was eventually formalized by AI researchers as the physical symbol system hypothesis.

In the 19th century, mathematical logic was developed much further by mathematicians and logicians such as George Boole and Gottlob Frege. This extensive study of mathematical logic eventually produced a breakthrough: Bertrand Russell and Alfred North Whitehead’s masterpiece, the Principia Mathematica. Its three volumes displayed the power of symbolic logic to the utmost extent, and showed how advances in philosophy and mathematics could greatly benefit the world. Though the book is extremely long and complex, it popularized the study of logic and contained several breakthroughs of its own, mostly concerning metalogic.

One question still lingered, though: how extensively could symbolic, mechanized logic capture human thought? Could all mathematical reasoning be formalized?

This question was answered by Kurt Gödel’s incompleteness theorems, Alan Turing’s Turing machine, and Alonzo Church’s lambda calculus. Together they proved that there were indeed limits to the power of formal reasoning, but also that, within those limits, any form of mathematical reasoning could be mechanized. This was the breakthrough that prompted scientists to seriously consider the possibility of building machines that could think and reason.

The Genesis of true AI research

Research into Artificial Intelligence was facilitated by breakthroughs in computer science. Several scientists had theorized about the possibilities of such machines even before the 20th century. Ada Lovelace predicted that “it [the Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine. Supposing for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”

The first modern computers were the massive machines of the Second World War, such as Colossus, which was used to break codes, and ENIAC. Their designs built on the theoretical foundations laid by Alan Turing and were developed further by John von Neumann.

Research in neurology at the time had found that the brain essentially worked as an electrical network of neurons that fired in all-or-nothing pulses. The field initially relied on three basic ideas: Norbert Wiener’s cybernetics (the study of control and communication in animals and machines), Claude Shannon’s information theory, which described digital signals, and Alan Turing’s theory of computation, which showed that any form of computation could be described digitally. Together, these suggested that it might be possible to construct an entirely artificial brain built from electronic components.

In 1950, Alan Turing published Computing Machinery and Intelligence, the first serious proposal in the philosophy of artificial intelligence. In it, he proposed what is now known as the Turing Test: if a machine could carry on a conversation with a human being that was indistinguishable from a conversation between two human beings, then it could reasonably be said that the machine was “thinking”.

In 1951, Christopher Strachey wrote a program that could play checkers, and Dietrich Prinz wrote one for chess. These were considered, at the very least, to be primitive forms of artificial intelligence, since they could compete with human beings. Such programs, now called game AI, have been used as a measure of progress in AI ever since.

The earliest ventures in building artificial intelligence systems were experimental robots built using only analog circuitry. A particularly impressive one was the Johns Hopkins Beast, built at the Johns Hopkins University Applied Physics Laboratory. It could roam the laboratory and recognize wall outlets so it could plug itself in and recharge. This was accomplished without the use of computers; instead, the Beast used dozens of transistors controlling analog voltages. A sonar guidance system was developed for it, giving it the ability to determine its position in the laboratory and recognize obstacles in its path.

Logician Walter Pitts and neurophysiologist Warren McCulloch analyzed networks constructed from artificial neurons and demonstrated how they could implement simple logical functions. This work was the beginning of what we now call neural networks. Marvin Minsky and Dean Edmonds, inspired by Pitts and McCulloch, went on to build the first neural net machine, the Stochastic Neural Analog Reinforcement Calculator (SNARC), in 1951. It was a randomly connected network of about 40 synapses, based on Hebbian theory, which states that when two cells repeatedly fire together, the connection between them grows more efficient. This was one of the first practical applications of the ideas that would become the basis of unsupervised learning. Minsky became a founding member of Project MAC at MIT and remained one of the most important innovators in the field of AI for the rest of his career.
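
To make the Hebbian idea concrete, here is a minimal sketch in Python; the two-layer toy setup, the learning rate, and the function name are illustrative assumptions, not a description of how SNARC was actually wired.

```python
import numpy as np

def hebbian_update(weights, pre, post, learning_rate=0.1):
    """Strengthen each connection in proportion to the product of its
    pre- and post-synaptic activity (Hebb's rule)."""
    return weights + learning_rate * np.outer(post, pre)

# Toy example: two input neurons feeding two output neurons.
weights = np.zeros((2, 2))
pre_activity = np.array([1.0, 0.0])   # only the first input fires
post_activity = np.array([1.0, 1.0])  # both outputs fire

for _ in range(5):
    weights = hebbian_update(weights, pre_activity, post_activity)

print(weights)  # connections from the active input grow; the others stay at zero
```

Connections between co-active neurons grow with every pass while the rest stay unchanged, which captures the essence of Hebb's rule.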

In 1956, a conference was held at Dartmouth College, organized by Minsky, John McCarthy, Claude Shannon, and Nathaniel Rochester. There, McCarthy persuaded the participants to accept “Artificial Intelligence” as the name of the field. The conference was attended by several scientists, such as Ray Solomonoff, Arthur Samuel, and Allen Newell, who would create important programs in the following years. Because of the milestones achieved there, the Dartmouth Conference of 1956 is now considered the birth of AI as a field.

The Dawn of AI

After the Dartmouth conference, there was a period of explosive growth in the field of Artificial Intelligence, with several important programs being written in the following years. Computers were now able to solve algebra word problems, prove geometry theorems, and even speak English. Several government agencies, like DARPA, invested heavily in AI research.

Some notable programs created in this period were the General Problem Solver, which used means-ends analysis to narrow the space of possible solutions to a problem; search-based programs that solved algebraic problems or planned actions by working through goals and sub-goals; and natural language processors such as Daniel Bobrow’s STUDENT and Joseph Weizenbaum’s ELIZA. Most of these programs, however, were limited to processing words and numbers.
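
To give a flavor of the goal-and-sub-goal style of search these early programs relied on, here is a small sketch of a breadth-first planner in Python; the toy problem (reaching a target number by adding one or doubling) and every name in it are invented for illustration and bear no relation to the actual General Problem Solver code.

```python
from collections import deque

def plan(start, goal, operators):
    """Breadth-first search over states: repeatedly expand intermediate
    states (sub-goals) until the goal state is reached."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, op in operators.items():
            nxt = op(state)
            if nxt not in visited and nxt <= goal:  # prune states past the goal
                visited.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None

# Toy problem: reach 14 starting from 1 using "add 1" and "double".
operators = {"add1": lambda x: x + 1, "double": lambda x: x * 2}
print(plan(1, 14, operators))  # ['add1', 'add1', 'double', 'add1', 'double']
```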

In the late 60s, Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificial situations called micro-worlds, where basic principles could be studied with simple models such as frictionless planes and rigid bodies. Research centered on the “blocks world”, a flat surface populated by colored blocks of different shapes and sizes. This led them to build a robot arm that could stack blocks. Building on the micro-world program, Terry Winograd, a professor of computer science, created a program named SHRDLU, which could communicate in ordinary English sentences and plan and execute operations in the blocks world.
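
A blocks world needs surprisingly little machinery to represent. The sketch below is a hypothetical, heavily simplified Python version; the block names and the single move operation are invented for illustration, and SHRDLU's real implementation was far richer.

```python
# A tiny blocks-world state: each stack is a list, bottom block first.
state = {"stacks": [["red", "green"], ["blue"]]}

def move(state, block, dest_stack):
    """Move `block` onto the top of stack `dest_stack`, provided nothing is on top of it."""
    for stack in state["stacks"]:
        if stack and stack[-1] == block:    # the block must be clear (on top)
            stack.remove(block)
            state["stacks"][dest_stack].append(block)
            return True
    return False                            # the block is buried or missing

move(state, "green", 1)   # put green on top of blue
print(state["stacks"])    # [['red'], ['blue', 'green']]
```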

DARPA continued to grant money to several institutions well into the 70s, such as Project MAC, Carnegie Mellon University, and the Stanford AI Project, which have remained major centers of AI research to this day.

However, this didn’t last. In the 70s, researchers began running into problems and discovering hardware limitations that could not be overcome at the time. Even the most impressive programs were only good enough to handle the simplest versions of the problems they were meant to solve. The public had also begun to lose interest in AI research, and the fact that most of the existing programs were essentially toys running on extremely expensive hardware didn’t help. Limitations such as limited computing power, the inability to scale up to combinatorial problems, the small amount of data available to train computers on, and the inability to adapt logic to the specific problem at hand proved insurmountable. The result was an era now called the AI Winter. Agencies such as DARPA and the NRC, disappointed with the lack of progress in the field, began to cut funding to research organizations and end their support. As early as 1966, ALPAC had released a report heavily criticizing the failure of machine translation efforts to achieve the promised quality.

The Curse of High Expectations

In 1973, James Lighthill released a report titled “Artificial Intelligence: A General Survey”, in which he criticized the lack of progress made in the field in England, leading to the near-total dismantling of AI research there at the time. DARPA canceled its annual grants to CMU, and by 1974 it had become difficult to receive grants for AI research at all.

Several researchers criticized the unrealistic predictions made by their colleagues and the exaggeration of what AI could achieve at that time. In the 60s, funding had been focused on people rather than specific projects; this stopped with the passing of the Mansfield Amendment in 1969, after which DARPA focused solely on mission-oriented research, on projects such as autonomous weapon systems and battle management programs.

Even in the golden age, several philosophers had ridiculed the claims made by AI researchers. John Searle’s Chinese Room argument held that if a program cannot be shown to understand the symbols it uses, then it cannot be proven to be capable of thought. Hubert Dreyfus argued that human reasoning could not be replicated through symbolic manipulation, since it relies heavily on intuition and practical know-how. At the time, both Searle and Dreyfus were ignored by Minsky and most AI researchers; Minsky in particular stated that “they misunderstand, and should be ignored”. According to Joseph Weizenbaum (the creator of ELIZA), the treatment of Dreyfus by his colleagues was unprofessional and childish. In subsequent years, several researchers, including Peter Wason, Eleanor Rosch, and Daniel Kahneman, would conduct experiments that supported Dreyfus’s views.

When psychiatrist Kenneth Colby claimed that he had written a computer program based on ELIZA that could conduct psychotherapeutic dialogue, Weizenbaum criticized the assumption that a mindless program could be used as a serious therapeutic tool. This caused a feud with Colby, which only worsened when Colby refused to credit Weizenbaum. Weizenbaum later published Computer Power and Human Reason, in which he argued that the misuse of AI could devalue human life.

In 1975, Minsky published a paper formulating an approach to AI based on the common tools he saw his fellow researchers using. He noted that when people try to understand an object, they tend to construct a mental framework that combines a set of facts about it, regardless of whether those facts are strictly logical. These structured sets form a context for what they say or think. He called these structures “frames”. Object-oriented programming would later draw on this concept in its idea of inheritance.
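
The family resemblance between frames and class inheritance is easy to see in code; the following Python fragment is a loose, hypothetical illustration rather than Minsky's own formalism.

```python
class Bird:
    """A 'frame' for birds: a bundle of default facts about the concept."""
    covering = "feathers"
    can_fly = True

class Penguin(Bird):
    """A more specific frame inherits the defaults and only records the exceptions."""
    can_fly = False

print(Penguin.covering, Penguin.can_fly)  # feathers False
```

The specific frame inherits the general defaults and overrides only what differs, which is how frame systems were meant to economize on knowledge.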

The revival of AI research

In the 1980s, the Japanese government began to aggressively fund AI research through its Fifth Generation Computer Systems initiative, which aimed to build powerful computers using parallel computing and logic processing. Over time, the field of AI became commercially successful: corporations around the world began to use forms of AI to create programs called expert systems, and these became the focus of mainstream AI research.

Expert systems are programs that answer questions and solve problems in a specific domain, using logical rules derived from expert knowledge. A notable example is MYCIN, developed in 1972, which could diagnose infectious blood diseases. Though expert systems usually restrict themselves to a small domain of knowledge to avoid the commonsense knowledge problem, their simple design made them easy to build and modify. And fortunately, these programs were genuinely useful, unlike most early AI programs.
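
At their core, many expert systems amounted to a set of if-then rules applied repeatedly to a working set of facts. The sketch below is a minimal forward-chaining engine in Python; the rules and facts are toy examples and are not taken from MYCIN's actual knowledge base.

```python
# Each rule: if every condition is already a known fact, add the conclusion.
rules = [
    ({"fever", "gram_negative_rod"}, "possible_bacteremia"),
    ({"possible_bacteremia"}, "recommend_blood_culture"),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "gram_negative_rod"}, rules))
# derives 'possible_bacteremia' and then 'recommend_blood_culture'
```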

By 1985, corporations had begun to spend over a billion dollars on in-house AI research, and this spending created an AI industry that included both hardware companies like Symbolics and software companies such as IntelliCorp and Aion.

This decade also saw the first attempt to create a database containing the knowledge that an average person generally has, called Cyc. The project was led by Douglas Lenat, who argued that there was no real shortcut to solving the commonsense problem: the only way to ensure an AI would have all the knowledge an ordinary human has would be to teach it, one concept at a time. Though thorough, this method would take decades to complete.

In 1989, the chess programs created at CMU, HiTech and Deep Thought, managed to defeat chess grandmasters. This created even more public excitement about the future of artificial intelligence and machine learning.

Fulfillment of Promises Made, and the Future

After more than five decades, the field of AI research finally began to fulfill some of the promises its founders had made. Since the early 90s, AI systems have increasingly been used in business, though mostly in back-end systems. The field’s reputation has not entirely recovered, however, and even today there is still disagreement about why the early decades failed to deliver satisfactory results. Though the rise in computing power and cloud computing has made research cheaper and more feasible than before, AI has fragmented into competing sub-fields that focus on specific problems and often adopt separate names to distance themselves from the tarnished label of AI. Regardless, the field has become more successful than ever before.

Since the late 90s, AI systems have achieved several superhuman feats, many of which have become famous. In 1997, the Deep Blue system defeated world chess champion Garry Kasparov. Robots such as Nomad have been able to explore regions with extreme temperatures at a fraction of the risk a human exploration team would face, and have even made the exploration of extraterrestrial bodies such as Mars possible.

AI-based autonomous systems became common even in households, such as Sony’s AIBO. Web crawlers and other information extraction programs have made using the World Wide Web much easier than before. Gaming companies such as Nintendo and Microsoft have built wireless human body tracking systems focused on accuracy and convenience, which have sold millions of units and have only gotten better with every generation. IBM has created computers capable of advanced language processing, like Watson, which defeated two champions on the quiz show Jeopardy!, and Project Debater, which can debate with human beings in real time and has held its own against champion debaters.

Image processing AI that uses deep learning to identify objects in pictures has become common, and it is now possible for anyone with intermediate programming knowledge to create their own image detection network. Companies such as Boston Dynamics have created autonomous robots and taught them to replicate basic motor skills like walking and jumping.
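
To give a sense of how accessible this has become, here is a minimal sketch of a small image-classification network written with PyTorch; the layer sizes, the class name, and the assumption of 32x32 RGB inputs with 10 output classes are arbitrary illustrative choices, and a real image detection network would of course be trained on labeled data rather than used untrained like this.

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """A deliberately small convolutional classifier for 32x32 RGB images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 32x32 -> 32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyClassifier()
dummy_batch = torch.randn(4, 3, 32, 32)   # 4 fake images
print(model(dummy_batch).shape)           # torch.Size([4, 10]): one score per class
```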

AI systems such as Alibaba’s language processing AI have managed to outscore human participants on a Stanford University reading comprehension test, and Google has announced Google Duplex, which can carry out phone conversations in remarkably natural-sounding human speech.

Though the disappointments of earlier decades still haunt the field of AI research to this day, computing technology has been advancing exponentially in recent years. Through the Internet, anyone can now receive a free education in the field and join open-source projects that benefit humanity. The wealth of data from studies done in past decades can now be re-analyzed in modern contexts to solve new problems, which has been extremely useful in fields such as medicine and engineering. Machine learning has been used to build bionic limbs with greater functionality than older products, and AI has powered facial recognition and data processing systems that have been extremely useful to law enforcement. Artificial Intelligence is a field with amazing potential, and eventually we will be using these systems to accomplish tasks previously thought impossible.