AI’s obsession with the seasons

The Big Freeze

The ’20s have arrived and, on cue, the past few months have delivered a raft of decade-in-review retrospectives by commentators from the worlds of entertainment and science, sport and politics, and everything in between. Such stories are a staple of the major media outlets, and it was no great surprise to stumble upon Sam Shead’s BBC article this weekend, which wraps up the past decade’s progress in AI with gloomy speculation that we may be on the verge of another AI Winter; that progress in the field is starting to plateau.

Credit to Shead: the sensational headline and early pessimism give way to a fairly balanced article that distinguishes between progress in the two AI sub-fields of Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI). He describes how, following much early hyperbole, AGI (the ability of machines to reason, represent knowledge, plan, learn, communicate and integrate these skills towards a common goal) has failed to deliver on the early promises made on its behalf.

It is perhaps telling that Shead’s ‘Winter’ is a reference to a trough in the hyperbole surrounding AI, as opposed to a trough in actual progress, because the rate of gains in narrow, domain-focussed AI (sometimes called Weak AI, the most unkind of all the labels) has been nothing short of mind-boggling.

Narrow Gains

Much of this progress has been made possible by improvements to the AI researcher’s toolkit.

I first began tinkering with Neural Networks, a staple of Narrow AI, twenty years ago, when every component, from file handling to the routines for calculating gradient descent, had to be built from scratch. Since then, data manipulation, processing and storage technology has progressed rapidly thanks to the efforts of Google, Amazon, IBM and others. Each has built powerful, universally accessible tools for the development and deployment of AI solutions. Combined with the onward march of Moore’s law and increases in parallelisation, these advances have enabled businesses and organisations to invest in AI capabilities that create real practical and commercial value.
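
To make that concrete, here is a minimal sketch of the kind of from-scratch training loop I mean: a single linear neuron fitted by hand-coded gradient descent, with nothing but NumPy to lean on. The data and variable names are purely illustrative.

```python
# A from-scratch fit of one linear neuron by hand-coded gradient descent.
# Everything here is illustrative: no framework supplies the gradients.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)   # noisy linear target

w = np.zeros(3)                               # weights to learn
lr = 0.1                                      # learning rate

for epoch in range(200):
    pred = X @ w                              # forward pass
    grad = 2 * X.T @ (pred - y) / len(y)      # gradient of mean squared error
    w -= lr * grad                            # gradient-descent update

print(w)                                      # should land close to true_w
```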

One need only look to one’s own everyday experience of interacting with consumer technology to see this progress: from the voice recognition systems that interpret your questions when you call your bank, to recommendations for new Netflix series you might enjoy since you seemed to enjoy Breaking Bad so much, to the automatic tagging and organising of the photos you share with friends.

Amid talk of an AI Winter, there are no signs of organisations walking away from, or scaling back, their investment in these narrow domains.

As with many aspects of technological progress, advances in Deep Learning are the product of technology combinations: the pairing of Software Development Kits (SDKs) for building machine learning algorithms with the development of specialised Tensor Processing Units (TPUs), for instance, results in a compounding of technological benefit. In this regard, progress in ANI has come from doing more of the same, only better. Impressive as these developments seem, they are the product of brute force. Most of the fundamentals are the same today as they were at the start of the millennium, just easier and faster to implement.
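
As an illustration of that compression, here is the same linear fit as the from-scratch sketch above, this time expressed through a modern SDK. Keras is used as one widely available example (any of the major frameworks would do), and the arrays are regenerated so the snippet stands alone. The framework supplies the gradients and the optimiser, and its distribution strategies let the same model code target GPUs or TPUs.

```python
# The same linear fit, but the SDK handles gradients, optimisation and
# hardware placement. Data regenerated here so the snippet is self-contained.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)).astype("float32")
y = (X @ np.array([2.0, -1.0, 0.5])).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])  # one linear neuron
model.compile(optimizer="sgd", loss="mse")               # gradient descent, MSE
model.fit(X, y, epochs=200, verbose=0)                   # the whole training loop

print(model.layers[0].get_weights()[0].ravel())          # ~ [2.0, -1.0, 0.5]
```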

For this reason, I believe that the progress of Deep Learning in the ’10s will now go much the same way as other advances like Optical Character Recognition or chess-playing algorithms: as it becomes more normalised, the loss of mystique causes us to recategorise the technology so that we no longer class it as AI at all. Yes, it’s impressive, but it’s not really intelligent, is it?

On the Horizon?

So, what of the prospects for progress in AGI?

When it comes to developing systems that can do the things that biological brains can do, many daunting puzzles remain, the most challenging of which include understanding the mechanisms that give rise to emotional experience, consciousness, planning function and creativity.

Let’s assume that, for an AGI to be classed as such, it must demonstrate a degree of competence in meeting the definitions described at the beginning of this article: the ability to reason, represent knowledge, plan, learn, communicate and integrate these skills towards a common goal. They’re a little lightweight, but they’ll do for now.

Whilst there are clearly many aspects of animal brain function that we have yet to understand and that bear on the challenge of creating an AGI, it is unlikely that we must understand every aspect, in much the same way that I don’t need to be an expert in molecular biology to successfully grow tomatoes in my garden. A level of abstraction or imperfection in the components is acceptable, assuming our only objective is to get an AGI ‘off the ground’. Such imperfections could bring with them other issues in relation to control, but I will save that discussion for another time.

For example, in relation to the mechanisms listed above, I believe that those giving rise to planning function and creativity are essential for a system to exhibit the behaviours of an AGI. Both require inventiveness and the ability to imagine and originate something new (or at least a capacity to combine existing thoughts into new ideas).

One need only look at the work of Oscar Sharp in the short film Sunspring, the screenplay of which was written by a Recurrent Neural Network, to understand just how far adrift we are in this domain and how dumb (relatively speaking) ANI can be. (If and when you watch it, consider that it also benefitted from interpretation by a BAFTA award-winning director!)
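
For a feel of why such machine-written text reads the way it does, here is a toy stand-in for that screenplay generator: not an LSTM but a far simpler character-level bigram model, trained on a few made-up lines and then sampled. The corpus is purely illustrative; the point is only that next-character statistics yield text that is locally plausible and globally meaningless.

```python
# A toy stand-in for a screenplay-writing network: a character-level bigram
# model. The corpus is invented for illustration, not taken from Sunspring.
import random
from collections import defaultdict

corpus = (
    "he looks at her and she looks at him "
    "in a future with mass unemployment "
    "i don't know what you're talking about"
)

# Record which character tends to follow which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(0)
char = random.choice(corpus)     # start from a random character
script = char
for _ in range(120):
    char = random.choice(follows[char])  # sample the next character
    script += char

print(script)                    # locally plausible, globally meaningless
```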

Other mechanisms found in brains may be less fundamental to the creation of AGI. Emotion, for example, is an integral part of the human thought process, but whilst it forms part of human decision making, it undoubtedly impedes truly rational decision making. I consider an AGI that has never been to my house before, but that is able to prepare a gourmet meal from scratch for my dinner party this evening, no less capable because it doesn’t experience the fear, as I would, that the guests may not like its cooking. That said, understanding more about emotions may help to solve some of the problems of AI control, and ways of convincingly simulating those emotions would undoubtedly help with human adoption.

If Not Now, When?

So, for progress to be made on AGI, a better understanding is needed of many of the higher-level cognitive functions of biological brains and of how, in turn, they leverage the lower-level neuronal functions that we’re getting better at emulating every day. This is the paradigm shift that’s needed to broaden today’s constrained horizons in regard to AI technology.

These are indeed big hurdles, but we may be closer to achieving this aim than we first think.

When talking about AGI, I’ve noticed a tendency for people to isolate the resulting intelligence, as if it must be a discrete entity, like Arnie in The Terminator, or some God-like embodiment: a kind of anthropomorphism. The term singularity, used to describe the point at which technological growth crosses into uncontrollable and irreversible expansion, itself conjures images of a solitary super-being.

But if the history of technological advancement has taught us anything, it is that an AGI is more likely to be something better connected. During the late 1990s we marvelled at technology’s ability to cram onto a single CD-ROM information that a few years earlier would have spanned the volumes of an Encyclopaedia Britannica in the local library. Today we think nothing of the fact that a body of knowledge many orders of magnitude larger is accessible online at any time of the night or day, from home, on a train and countless places in between. Resources of such scale were unimaginable at the time of the CD-ROM or the first Terminator movies, but the combinatorial explosion brought by a quarter-century’s compounded improvements to hardware, mobility and infrastructure makes them quite mundane and, incredibly, more often than not free!

As Yuval Noah Harari points out in Homo Deus, we need only look at the cognitive revolution that took place in apes around 70,000 years ago. Then, just a few small changes to DNA and a bit of ‘hardware’ rewiring gave rise to the general intelligence we perceive in today’s humans. Long before the collective resources of Google, Amazon and Facebook and the seemingly inexorable growth of computing power had a hand in it, an evolutionary switch was flicked.

The application of a meaningful research and development effort towards solving some of the puzzles surrounding the mechanisms of higher cognitive function could well be enough to begin to knit together some of the Narrow AI capabilities already built with the vast expanse of knowledge on the Internet. Suddenly we may begin to recognise the result as something resembling an AGI. Those stuttering, constrained first steps may resemble the ones a toddler takes. But, as any parent will attest, such abilities develop very quickly, leaving the parents struggling in vain to keep up.

From that point on, one can only imagine the possibilities. In Human Compatible, Stuart Russell’s book on control in AI systems, he points to the possibility that all the knowledge components needed to cure cancer in all its forms might be available, online, right now. It may be that we just need a little help from someone or something (our newly created AGI, perhaps) to assemble them in the correct sequence, and so another great milestone bearing testament to human ingenuity is reached.

Granted, significant hurdles remain, meaning we’re unlikely to be on the verge of any breakthroughs for the next few years at least. As Russell goes on to explain, the problems of applying common sense to language processing, of how cumulative knowledge can be attained by the AGI, and of how hierarchies of plans and sub-plans for delivering some outcome are conceived must all be tackled. But our experience of exponential technological growth should teach us that challenges that today appear insurmountable rapidly become a vanishing dot in the rear-view mirror of our combined technological efforts.

So, as we reflect on remarkable progress in the AI field this past decade and wonder at what’s to come, any talk of winter seems misguided. For AI, spring has just sprung, and the year’s set to be a scorcher!