Can Animals Help Us Build Better AI?


Machine learning/AI can do apparently impressive things, but navigating the everyday world is not one of them. Perhaps animal minds can lead the way.

(Pixabay, kalhh)

AI or nAI?

Machine learning has been making plenty of headlines in the past few years. Rightfully so, even though headlines tend to oversell. Advances in computing power, algorithmic complexity, data handling capacities, and models of learning mean that machine learning/AI is increasingly being used in many fields.

In previous posts, I have written about machine learning/AI in general science and art, but also more specifically in (warning, link fest) historical research, genetic enhancement, mental health, aging research (including the development of ‘aging clocks’), video game ecology, Hollywood, astrobiology, epidemiology, stock markets, and the job market.

Plenty of AI to go around, it seems.

And yet, all of these examples are very narrow in their implementation, which is why this kind of AI is known as narrow or weak AI. (I prefer 'narrow'. There's nothing weak about the massive data analyses and pattern recognition some of these systems can do.)

These narrow AIs are vastly superior to our minds, but only in very specific contexts and conditions. Take them out of their comfort zone and they fumble like a bull in a china shop. A blindfolded bull. Being chased by drunk matadors.

These AIs are, in other words, strongly bounded. Even the system responsible for the majority of AI headlines, DeepMind's AlphaGo (and its successors), is limited in its purview. It excels at learning to play games. Turn-based games with a predefined set of rules. I doubt, though, whether it could tie its own shoelaces.

Of course, none of the above takes anything away from the great work that has been done on machine learning in the last decade or so.

Still, general AI, which, by definition, is able to outperform human minds in several unrelated contexts, remains a dream. To return to the shoelaces: AI today lacks 'common sense', the ability to navigate the real world in all its complexity. Because I like metaphors: current AI is very good at exploring the data oceans, but it doesn't know how to sail a real boat.

Animals, the other ‘other’ minds

A new paper (open access, so be sure to check it out) suggests that looking at animal cognition research might be a path to remedy AI’s lack of common sense.

Most efforts to endow AI with common sense have so far focused on language. The Winograd schemas are the most well-known example. In these tests, an ambiguous sentence is provided, and the AI under evaluation has to answer a question about it. The correct answer depends on understanding how the world works. For example:

Sentence: The falling rock smashed the bottle because it was [heavy/fragile].

Question: What does 'it' refer to?

Answer: The rock if the word is 'heavy', the bottle if it is 'fragile'. The pronoun's referent flips with the word choice; in other words, context matters.
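
To make the test format concrete, here is a minimal sketch of how a single schema could be represented and scored in Python. Everything here (the WinogradSchema dataclass, the evaluate helper, the resolver interface) is an illustrative assumption of mine, not the format of any official benchmark:

```python
# Hypothetical representation of one Winograd schema plus a scoring helper.
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    template: str                # sentence with a {word} slot for the special word
    pronoun: str                 # the ambiguous pronoun
    candidates: tuple[str, str]  # the two possible referents
    answers: dict[str, str]      # special word -> correct referent

rock_bottle = WinogradSchema(
    template="The falling rock smashed the bottle because it was {word}.",
    pronoun="it",
    candidates=("the rock", "the bottle"),
    answers={"heavy": "the rock", "fragile": "the bottle"},
)

def evaluate(schema: WinogradSchema, resolver) -> float:
    """Score a pronoun resolver over both variants of one schema."""
    correct = 0
    for word, expected in schema.answers.items():
        sentence = schema.template.format(word=word)
        correct += resolver(sentence, schema.pronoun, schema.candidates) == expected
    return correct / len(schema.answers)

# A resolver that always picks the first candidate scores exactly 50%,
# which is the point: surface statistics don't help, world knowledge does.
print(evaluate(rock_bottle, lambda s, p, c: c[0]))  # 0.5
```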

(Pixabay, PublicDomainPictures)

However, many animals have an excellent sense of how their environment works. They know things fall, and they know that a heavy falling object is likely more dangerous than a light one (a feather isn't a coconut). And they don't need human language to infer this. Could the focus on language be too anthropocentric, too human-biased?

For the authors of the paper, the answer is a definite 'yes'. What's more, they argue that, thanks to advances in deep reinforcement learning and 3D simulated environments, the time has come to leverage work in animal cognition research for 'training' AI. (One great example of this is the Animal-AI Olympics effort.)

By dropping an AI agent into a simulated 3D environment, we can let it learn in a virtual, hands-on way (a minimal code sketch of such a training loop follows the list below). Furthermore, looking at animal cognition research can provide us with ways to instill a fundamental understanding of key concepts, such as:

  • Object permanence and affordances. (The rock does not disappear when a curtain conceals it. Hey, I can use the rock to break open a stubborn shelled nut.)
  • Containers and enclosure. (I can put something in this for safekeeping. Huh, now I am the thing kept inside; can I get out?)
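
Here is what that recipe might look like in code, as a minimal sketch under loud assumptions: "Playroom3D-v0" is a hypothetical environment id of my invention, the loop uses the standard Gymnasium API, and the random action sampler is a stand-in for an actual deep reinforcement learning agent.

```python
# Sketch of the "drop an agent into a rich 3D world" training loop.
import gymnasium as gym

# Hypothetical 3D playroom with occluders, containers, and tool-like objects.
env = gym.make("Playroom3D-v0")

obs, info = env.reset(seed=0)
for step in range(10_000):
    action = env.action_space.sample()  # placeholder for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    # Reward only arrives when the agent succeeds at a task such as
    # retrieving an object hidden behind a curtain, so object permanence
    # is never labeled explicitly; the agent has to earn the concept.
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```

The design point is the one the paper stresses: the key concepts are never spelled out as supervision. They emerge, if they emerge at all, from extended interaction with the environment.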

How to build general AI is still a mystery:

But we advocate an approach wherein RL [reinforcement learning] agents, perhaps with as-yet undeveloped architectures, acquire what is needed through extended interaction with rich virtual environments.

Maybe the answer is AI learning to tie its own shoelaces.