Learning to change quickly



Being open to robot helpers

“closeup photo of white robot arm” by Franck V. on Unsplash

by Mike Meyer

One of the many things that requires uncomfortable change is how we define our new tools. The history of Homo sapiens is a history of tools devised to augment our biological abilities. It has long been said that our tools define us and have made us what we are.

While it was once maintained that Homo sapiens were ‘different’ from other creatures because we could make and use tools, that view has, thankfully, been recognized as hubris. Many other animals make and use tools, from otters cracking shellfish with rocks to crows and related birds envisioning and making sophisticated tools to solve problems. Needless to say, our fellow primates are nearly identical to us up to the age of three, when Homo sapiens begin to seriously use language.

Tools are very important for the most sophisticated life forms on this planet; we just take them a lot farther. Now we are taking tools to an entirely new level, and people are beginning to freak out. Mostly they are getting confused.

Intelligent devices are tools that we have designed to augment our biological limitations. Those limitations now include the difficulties we have in processing big data while monitoring thousands of variables and applying models to make decisions. This is no different from using highly sensitive devices to measure minute amounts of carbon in our atmosphere or to break out specific chemicals in our blood.

The difference is handing basic decision making to models built through machine learning. We now have tools that are able to handle many things that, until a few years ago, only humans could do.

This led, naturally (and I use that word carefully), to fears of the great machine apocalypse as long visualized in books and movies. But as things have become rapidly more real in our everyday world we are extending that fear in all directions. That is a problem.

That is not to say that there are no threats or problems that could cause great grief. Change is about learning new things and how to use them as well as learning about the problems that can be caused. But these things are our tools. As they become more capable and sophisticated they need to be visualized in new ways. Failure to do that can mean failure to take advantage of things that we really need.

Today’s intelligent devices are growing more powerful, and it is very important to understand what they do and how they do it. The bar for being an effective human being is now higher, because ignorance of these things will lead to many errors and may be more dangerous than visions of a Matrix-like machine domination.

The rapid development of machine learning allows these devices to build models from data, and from actions on that data, that can be used to monitor and control processes of various kinds. These are typified by systems such as AlphaGo, a game-learning system that is able to play a game thousands of times and determine the best way to win. Chess has long been ruled by machines, and Go, the much more complex Asian game, has now been mastered as well.
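AlphaGo itself is vastly more sophisticated, but the core idea of learning from thousands of repeated plays can be sketched with a much simpler technique: tabular Q-learning on a toy game. Everything below (the corridor game, the parameter values) is an illustrative assumption of mine, not anything from AlphaGo:

```python
import random

# A toy "game": a corridor of 5 cells; the agent starts at cell 0
# and wins by reaching cell 4. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning: play the game many times, nudging a table
    of expected rewards until the winning moves stand out."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore occasionally; otherwise exploit the best-known move.
            if rng.random() < epsilon:
                action = rng.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            # Move the estimate toward reward + discounted future value.
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train()
# After training, the learned policy prefers "right" in every non-goal cell.
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

The machine never "understands" the corridor; it simply plays enough times that the table of values encodes the winning behavior — which is exactly the narrow, single-purpose competence described below.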

But in both cases this was done by a machine designed specifically for that one purpose. This is very similar to a person who does almost nothing but play chess for ten years and thus becomes a master. But even the most boring chess aficionado can do many other things. Our intelligent devices are very limited in that sense: they can really do only a few things, but they do them extremely quickly and for as long as we tell them to continue.

At the same time, they can be tremendously useful because they can learn a large range of conditions, with corresponding actions, for specific situations. They can be very precise in determining the correct action, and they perform tirelessly in situations that would soon bore a person.

But we do not yet have sentient machines with the vast flexibility of humans or even animals. This includes balancing conditions against environmental standards or determining very fine emotional responses in other humans. Other animals are far better at that than our intelligent devices for now. The normal family pet is a domesticated animal that is the product of thousands of generations of evolution in close proximity to people. Your family dog knows how you react to any number of actions and how weather, for instance, may affect those actions. Our intelligent machines will get there but they are not there yet.

Our immediate problem is bringing people up to an understanding of the current stage of intelligent devices: specifically, what they are good for and how best to use them, while limiting their use in areas that may currently be risky. This is a fast-changing situation, and these conditions will shift steadily. What is risky today may not be risky at all next year. That is the real challenge of this type of accelerating change.


One area that is getting increasing attention is robotic warfare. While many people have great fear of a robot with a gun, others are not worried at all. Since much of our technology in the 20th century was developed for military applications, this is a major area of concern. Realistically, we need to understand ourselves. It is not the machine that is going to go ‘bad’, because the machine will be designed to do the things that we have decided it should do.

We will have to design these intelligent machines with the ability to identify situations that are not appropriate for certain types of action. That is what must happen, but there will be failures along the way. We will need to constantly monitor what is being developed and how to define controls over the things the machine is being instructed to do. We need to use computational thinking as we work with these machines.

A very important fact to remember is that Homo sapiens is constitutionally unable to build a tool and then not use it. That is our nature, so we should not hide from that reality. Even if we decide never to use a tool, someone else will justify using it and do so whether we like it or not.

If we make something that can go Boom, then someone will want to make it go Boom. If it can destroy a whole city, then someone will find a justification to do that. There is no point in belaboring this, but it is what we are. Let’s not blame the machine. Things that are deadly need to be managed and controlled, but that doesn’t mean they will somehow go away and never be used. The sad American experience with out-of-control firearms shows how deadly the wrong ideas on this can be.

The following reference made this very clear to me. The quote is from an article on the question of artificial intelligence and robotic warfare. While this is a major concern for all of us who work on IT and ML issues, it is not understood by the general population.

Citing a recent survey of 27,000 people by the European Commission, Arkin said 60 percent of respondents felt that robots should not be used for the care of children, the elderly, and the disabled, even though this is the space that most roboticists are playing in.

Despite the killer robot rhetoric, only 7 percent of respondents thought that robots should be banned for military purposes. — ZDNet

Paro charging with pacifier (Wikipedia)

As noted above, one of the first areas we are looking at for robotic assistance is care for the elderly and incapacitated. This is official policy of the Japanese government, which has been placing robots with elderly people living alone for the last two years. Given that practical experience, I expect that a survey similar to the one cited above, if run in Japan, would have a very different outcome on where to use or not use robotic helpers.

As policies and programs for the implementation of intelligent devices in human society expand to different cultures, we could have major problems with attitudes that run counter to what can be done safely and with great benefit. This is the confusion that concerns me.

Paradoxically, England is also seeing growing use of the Japanese Paro, a robot that looks like a seal pup, as emotional support for Alzheimer’s patients.

Education will be the next major area of human augmentation by robots. Early education in particular takes great patience and tenacity, yet it is poorly paid and the positions are hard to fill. Whatever people think, this will be an area that increasingly becomes the work of artificially intelligent robots.

Sputnik China: news, commentary and broadcasts

China is currently at the forefront of practical implementation. Keeko, the robot teacher, works with young children in Chinese kindergartens. The children have quickly come to love Keeko, as Keeko is always patient and always consistent, unlike human teachers, who have been known to have bad days. And that is important.

My point is that this is not abstract, or something for a future date, but already in operation in other countries. These things are being found to be tremendously beneficial, yet we have people who think they should not be done. That needs to change before it gets politicized and weaponized.

America has a major problem because of cultural provincialism and the reality of steadily falling behind in broadly used technology. This needs to be fixed.


Source: Deep Learning on Medium