Original article published in Communications of the ACM – Artificial Intelligence
May 20, 2020
In the summer of 2009, the Israeli neuroscientist Henry Markram strode onto the TED stage in Oxford, England, and made an immodest proposal: Within a decade, he said, he and his colleagues would build a complete simulation of the human brain inside a supercomputer. They’d already spent years mapping the cells in the neocortex, the supposed seat of thought and perception. “It’s a bit like going and cataloging a piece of the rain forest,” Markram explained. “How many trees does it have? What shapes are the trees?” Now his team would create a virtual rain forest in silicon, from which they hoped artificial intelligence would organically emerge. If all went well, he quipped, perhaps the simulated brain would give a follow-up TED talk, beamed in by hologram.
Markram’s idea—that we might grasp the nature of biological intelligence by mimicking its forms—was rooted in a long tradition, dating back to the work of the Spanish anatomist and Nobel laureate Santiago Ramón y Cajal. In the late 19th century, Cajal undertook a microscopic study of the brain, which he compared to a forest so dense that “the trunks, branches, and leaves touch everywhere.” By sketching thousands of neurons in exquisite detail, Cajal was able to infer an astonishing amount about how they worked. He saw that they were effectively one-way input-output devices: They received electrochemical messages in treelike structures called dendrites and passed them along through slender tubes called axons, much like “the junctions of electric conductors.”
Cajal’s way of looking at neurons became the lens through which scientists studied brain function. It also inspired major technological advances. In 1943, the psychologist Warren McCulloch and his protégé Walter Pitts, a homeless teenage math prodigy, proposed an elegant framework for how brain cells encode complex thoughts. Each neuron, they theorized, performs a basic logical operation, combining multiple inputs into a single binary output: true or false. These operations, as simple as letters in the alphabet, could be strung together into words, sentences, paragraphs of cognition. McCulloch and Pitts’ model turned out not to describe the brain very well, but it became a key part of the architecture of the first modern computer. Eventually, it evolved into the artificial neural networks now commonly employed in deep learning.
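The McCulloch–Pitts idea can be made concrete with a few lines of code. The sketch below is a common modern presentation of their unit as a weighted threshold gate; the function names, weights, and thresholds are illustrative choices, not taken from the original 1943 paper.

```python
def mcp_neuron(inputs, weights, threshold):
    """Fire (True) iff the weighted sum of binary inputs meets the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return total >= threshold

# With suitable weights and thresholds, a single unit computes a basic
# logical operation -- the "letters in the alphabet" of the model:
AND = lambda a, b: mcp_neuron([a, b], [1, 1], 2)   # true only if both inputs fire
OR  = lambda a, b: mcp_neuron([a, b], [1, 1], 1)   # true if either input fires
NOT = lambda a:    mcp_neuron([a],    [-1],   0)   # inverts its single input
```

Chaining such units together, outputs feeding inputs, is what lets these simple operations be "strung together into words, sentences, paragraphs of cognition," and it is the same wiring pattern that survives in today's artificial neural networks.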