Machine intelligence is increasingly woven into daily life: countless innovations have appeared as implicit and explicit extensions of systems and services that already exist. One of the most impressive aspects of AI-enhanced everyday systems is their ability to anticipate specific human situations with precision. An example is the traffic forecasting service JamBayes, which overlays predictions about future traffic conditions on a digital traffic flow map. The system combines multiple variables, weighing key contextual evidence and the dynamics of flow across a wider area, to predict when traffic will become congested (Horvitz, 1). JamBayes illustrates the impact that AI detection and forecasting can have on daily life. These anticipatory forms of AI work with a 'surprise model', which considers the user's context-sensitive expectations about current and future situations. Systems such as JamBayes learn every day from the activities of human computer users (Horvitz, 2–3). The AI products that optimize our lives are in fact reflections of our own daily routines; the gaps in those routines are now being filled by AI products that assist us where our human capacities fall short.
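The idea of a 'surprise model' can be made concrete with a minimal sketch. This is not the actual JamBayes implementation, and the class name, thresholds, and traffic data below are illustrative assumptions: the model counts how often each outcome follows a context, and flags an observation as a 'surprise' when its estimated probability falls below what the user would expect.

```python
from collections import defaultdict

class SurpriseModel:
    """Toy context-sensitive surprise detector (illustrative only)."""

    def __init__(self, surprise_threshold=0.2):
        self.threshold = surprise_threshold
        # counts[context][outcome] -> number of times outcome followed context
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, context, outcome):
        """Update the model with one (context, outcome) observation."""
        self.counts[context][outcome] += 1

    def probability(self, context, outcome):
        """Estimated P(outcome | context), with add-one smoothing."""
        outcomes = self.counts[context]
        total = sum(outcomes.values())
        return (outcomes[outcome] + 1) / (total + 2)

    def is_surprise(self, context, outcome):
        """An outcome is surprising when it contradicts learned expectations."""
        return self.probability(context, outcome) < self.threshold

model = SurpriseModel()
# The model learns that weekday rush hour usually means congestion.
for _ in range(20):
    model.observe(("weekday", "17:00"), "congested")
model.observe(("weekday", "17:00"), "free-flowing")

# Free-flowing traffic at rush hour is now the surprising event.
print(model.is_surprise(("weekday", "17:00"), "free-flowing"))  # True
print(model.is_surprise(("weekday", "17:00"), "congested"))     # False
```

The design choice mirrors the essay's point: the same outcome can be unremarkable in one context and alarming in another, because expectations are conditioned on context rather than fixed globally.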
AI is thus mainly used today to solve practical problems; an aspect that still has to be developed is social skill itself. Most research in robotics aims at giving robots a 'technical form of intelligence': a robot can operate a manipulator effectively and with high precision, or recognize specific patterns. But if robots are to support humans, they must be sufficiently 'human-like' to communicate with us. This competence is needed for the acceptance and 'social integration' of the robot into society, and for building a truly 'autonomous' robot. The robot becomes our service partner in daily life, but first it has to show us a certain degree of respect, flexibility, adaptivity, robustness and autonomy (Dautenhahn, 334). As early as 1991, Ichiro Kato wrote a specification for a future humanoid welfare robot that would have to become capable of adapting to human motion and feelings. Such performance makes personal relationships possible, which in turn help the robot's learning system model the appearance, behavior, goals and beliefs of its interaction partner. The problem is that this form of behavior can never be fully achieved, because it cannot be reduced to the 'rational manipulation' of others; still, a robot's learning system can come closer to individual feelings such as emotional involvement and empathy (Dautenhahn, 335). To achieve this, robots are now allowed to 'grow up' in particular environments so that they acquire the social intelligence of a particular field. There seems to have been a paradigm shift from 'complete automation' to 'people-oriented' problem solving, in which people are explicitly used as 'teachers' for machines. Today this results in a kind of man-machine symbiosis (Dautenhahn, 338).
In recent years we have wanted to come closer to each other, rather than be driven apart. According to Lidewij Edelkoort, we live in an uncertain society still suffering from the traces of the economic crisis (Edelkoort, 2015), which is precisely why our attitude toward AI remains unconvinced. At first, technology seemed one of the causes of this drifting apart, but it can also be seen from a different point of view. The hardships of the past years have made humans overly protective (Edelkoort, 2015). Artificial intelligence can offer humanity a structure with which to order and restore our environment. Our society faces the challenge of realizing technologies that benefit humanity today, for the greatest risks will arise out of human activity around certain technological developments: synthetic biology, nanotechnology and artificial intelligence (Shaping Tomorrow, 2014).
On the other hand, we have to be aware of the darker side of the 'AI game'. AI is getting smarter, but what will happen if human abilities are rendered obsolete by AI, or if AI acts as the brains of 'negative' super machines? Nigel Smart (founder of Dyadic Security, vice president of the International Association for Cryptologic Research and professor at the University of Bristol) argues that it is not just about having smart robotics: within a few years the first 'quantum computer' will be built, which means the internet will no longer be a safe environment (Adams, 2017). This is a worst-case scenario, and clearly no quantum computer should be led by a determined party without a solid 'Quantitative Reliability at Confidence'. The quantum computer (as the ultimate form of AI) would be able to solve the world's human problems, and is thus likely to power all the AI systems on earth, which could become incredibly dangerous in the 'wrong' hands. Today our AI systems are just advanced machine learning software with extensive behavioral algorithms that adapt themselves to our preferences. These systems will not get smarter in an existential sense, but they will improve their skills and usefulness every day (Adams, 2017). That is something of which we must remain aware, and we must stay alert to the extent to which it happens.
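The claim that today's AI is 'software that adapts to our preferences' can be illustrated with a minimal sketch, assuming a hypothetical recommender that nudges per-category scores toward recent user behavior (the class, learning rate, and categories are invented for illustration, not drawn from any cited system):

```python
class PreferenceLearner:
    """Toy preference adapter: scores drift toward recent user behavior."""

    def __init__(self, learning_rate=0.3):
        self.rate = learning_rate
        self.scores = {}  # content category -> preference score in [0, 1]

    def record(self, category, engaged):
        """Nudge the category's score toward 1 (engaged) or 0 (ignored)."""
        current = self.scores.get(category, 0.5)  # neutral prior
        target = 1.0 if engaged else 0.0
        self.scores[category] = current + self.rate * (target - current)

    def recommend(self):
        """Return the category the user currently seems to prefer."""
        return max(self.scores, key=self.scores.get)

learner = PreferenceLearner()
for _ in range(5):
    learner.record("traffic alerts", engaged=True)
    learner.record("celebrity news", engaged=False)
print(learner.recommend())  # traffic alerts
```

Such a system never becomes 'smarter' in the existential sense the essay distinguishes: it only tracks behavior more closely, which is exactly why its usefulness, and the question of who steers it, both grow over time.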
Adams, R.L. "10 Powerful Examples of Artificial Intelligence in Use Today." Forbes. 2017. https://www.forbes.com/sites/robertadams/2017/01/10/10-powerful-examples-of-artificial-intelligence-in-use-today/#26547e4420de
Dautenhahn, Kerstin. "Getting to know each other: Artificial Social Intelligence for Autonomous Robots." Robotics and Autonomous Systems Vol 16, Issues 2–4 (1995): 333–356.
Edelkoort, Lidewij. “Gathering.” Trend Tablet by Lidewij Edelkoort. 2015.
Horvitz, Eric. "Machine Learning, Reasoning, and Intelligence in Daily Life: Directions and Challenges." Microsoft Research Vol 6, Issue 1 (2006): 1–6.
Shaping Tomorrow. “Artificial Intelligence — impacts on society.” 2014.
https://www.shapingtomorrow.com/home/alert/275454-The-Future-of-Intelligence—impacts-on-society