the BIRTH of CONTROL

What Happens When AI Gets It Wrong

the ETHICS of AI

by Yattish Ramhorry

It’s not weak to change and adapt. Flexibility is its own kind of strength. In fact, this flexibility combined with strength is what will make us resilient and unstoppable. ~ MARCUS AURELIUS, MEDITATIONS, 8.16

Artificial Intelligence, or AI as it is commonly known, is a system of mathematical algorithms, probabilities, and statistics designed to learn from training data so that it can make predictions in real-world applications. But probabilistic predictions can never be entirely accurate; a system built on probabilities will inevitably get some predictions wrong, and the failures are sometimes catastrophic.
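
To make that concrete, here is a minimal sketch of what such a system looks like in practice. It uses scikit-learn and made-up synthetic data (both my choices for illustration, not drawn from any real application): the model learns a noisy rule from training data, outputs probabilities rather than certainties, and inevitably gets some held-out predictions wrong.

```python
# Minimal sketch: a model learned from data gives probabilities, not certainties.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(0, 1, (2000, 3))
# The "true" rule is noisy, so no model trained on this data can ever be 100% accurate.
y = ((X @ np.array([1.0, -0.5, 0.8]) + rng.normal(0, 1, 2000)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("predicted probabilities:", model.predict_proba(X_test[:3])[:, 1])
print("held-out accuracy:", model.score(X_test, y_test))  # well below 100%
```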

Whereas a crashed or hacked laptop may be little more than a minor nuisance, it becomes far more important that an AI system does what you want it to do when it controls your car, your airplane, your pacemaker, your automated trading system, or your power grid. Another near-term challenge is preventing a devastating arms race in lethal autonomous weapons.

So, what happens when an AI algorithm goes horribly wrong?

The latest US Census data shows that Black and Hispanic populations have historically been under-banked. For AI to learn, it must be fed data. If the data shows that certain segments of the population are denied loans more often, it may falsely “learn” that those segments are greater credit risks, perpetuating a negative cycle.
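
Here is a minimal sketch of that feedback loop, again with scikit-learn and invented synthetic data (the feature names and numbers are hypothetical): a model trained on historically skewed approval decisions ends up scoring two applicants with identical finances differently, purely because of group membership.

```python
# Sketch of how historical bias in the training data is reproduced by the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: income (scaled) and a group indicator (0 = majority, 1 = under-banked group).
income = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Historical approvals depended on income, but the under-banked group was also
# denied more often regardless of income -- exactly the bias described above.
logit = 1.5 * income - 1.2 * group
approved = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(np.column_stack([income, group]), approved)

# Two applicants with identical finances, differing only in group membership.
same_income = [[0.5, 0], [0.5, 1]]
print(model.predict_proba(same_income)[:, 1])  # approval probability drops for group 1
```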

In another example, Amazon abandoned a hiring algorithm in 2018 because it passed over female applicants in favor of male applicants for tech roles. The reason was simple — the learning program had been fed data of past applicants and employees, the majority of which were male. If AI only considers past data, the future will never change.

Back in the spring of 2016, Microsoft ran into a public relations nightmare when its Twitter chatbot — an experimental AI persona named Tay — wandered radically off-message and began spouting abusive statements and even Nazi sentiments. “Hitler was right,” tweeted the scary chatbot. Also: “9/11 was an inside job.”

To be fair, Tay was essentially parroting offensive statements made by other (human) users, who were deliberately trying to provoke her. Aimed at the coveted 18- to 24-year-old demographic, the chatbot was designed to mimic the language patterns of a millennial female and initially cut loose on multiple social media platforms. By way of machine learning and adaptive algorithms, Tay could approximate conversation by processing inputted phrases and blending in other relevant data. Alas, like so many young people today, Tay found herself mixing with the wrong crowd.

In the first known autonomous vehicle-related pedestrian death on a public road, an Uber self-driving SUV struck and killed a female pedestrian on March 18, 2018, in Tempe, Arizona. The Uber vehicle was in autonomous mode, with a human safety driver at the wheel.

So what happened? Uber's investigation found that its self-driving software decided not to take any action after the car's sensors detected the pedestrian. Uber's autonomous mode also disables Volvo's factory-installed automatic emergency braking system, according to the US National Transportation Safety Board's preliminary report on the accident.

In the wake of the tragedy, Uber suspended self-driving testing in North American cities, and Nvidia and Toyota also stopped their self-driving road tests in the US. Eight months after the accident, Uber announced plans to resume self-driving road tests in Pittsburgh, although the company's self-driving future remains uncertain.

Driverless cars are the most pressing AI-related consideration for the insurance industry, with recent advances from the likes of Google, Uber, and Volvo making it likely they will dominate the roads within the next decade. In June, British insurance company Adrian Flux began offering the first policy specifically geared towards autonomous and partly automated vehicles. The policy covers typical car insurance staples such as damage, fire, and theft, as well as risks specific to AI: loss or damage resulting from malfunctions in the car's driverless systems, interference from hackers who have gained access to a car's operating system, failure to install vehicle software updates and security patches, satellite failure or outages affecting navigation systems, and failure of the manufacturer's vehicle operating system or other authorised software.

This is an important step forward, demonstrating that the industry is finally dealing with the problem.

Explainable AI

Explainable AI means being able to ask an AI application why it made the decision it did. The Defense Advanced Research Projects Agency (DARPA), an agency within the US Department of Defense, is currently working on its Explainable Artificial Intelligence (XAI) program to develop techniques that allow systems not only to explain their decision-making, but also to offer insight into the strong and weak parts of their reasoning. Explainable AI helps us know how much to rely on a system's results and how to help it improve.
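
One simple, concrete form of explainability is to use a model whose decision rules can be read directly. The sketch below trains a small decision tree on a made-up toy loan dataset (the features and labels are invented for illustration) and prints the rules it learned, which act as its explanation.

```python
# Sketch of a directly interpretable model: the printed rules are the "explanation".
from sklearn.tree import DecisionTreeClassifier, export_text

# Tiny hypothetical loan dataset: [income_score, years_employed]
X = [[0.9, 10], [0.8, 7], [0.3, 1], [0.2, 0], [0.7, 5], [0.1, 2]]
y = [1, 1, 0, 0, 1, 0]  # 1 = approved, 0 = denied

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Shows which thresholds on which features drove each decision.
print(export_text(tree, feature_names=["income_score", "years_employed"]))
```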

Auditable AI, by contrast, asks third parties to test a system's reasoning by feeding it a wide range of queries and measuring the results, looking for unintended bias or other flaws.
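
In miniature, such an audit might look like the sketch below. The score_applicant function is a hypothetical stand-in for the black-box system under audit, and the numbers are invented; the idea is to probe the system with matched applicant profiles that differ only in group membership and measure the gap in approval rates.

```python
# Sketch of a third-party audit: probe a black box with matched queries and compare outcomes.
import math
import random

def score_applicant(income: float, group: int) -> float:
    # Hypothetical stand-in for the system being audited;
    # a real audit would call the deployed model or its API instead.
    return 1 / (1 + math.exp(-(1.5 * income - 1.2 * group)))

def audit_approval_gap(scorer, n_probes: int = 1000, threshold: float = 0.5) -> float:
    """Probe the scorer with pairs that differ only in group and
    return the difference in approval rates between group 0 and group 1."""
    random.seed(0)
    approved = {0: 0, 1: 0}
    for _ in range(n_probes):
        income = random.gauss(0, 1)
        for group in (0, 1):  # identical finances, different group
            approved[group] += scorer(income, group) >= threshold
    return (approved[0] - approved[1]) / n_probes

print(f"approval-rate gap between groups: {audit_approval_gap(score_applicant):.1%}")
```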

Fei-Fei Li, AI pioneer, former Google executive, and Co-Director of Stanford University's Human-Centered AI Institute, argues that another way to help eliminate bias, especially gender and race discrimination, is to get more women and people of color involved in developing AI systems. That is not to say that programmers are at fault for deliberately building bias into AI; rather, having a broader range of people involved can help stamp out unconscious leanings and bring overlooked concerns to light.

A Few Questions for All of Us to Consider

There’s no question that AI is already having a significant impact on our lives — many times without us even realizing it. What questions or concerns do you have about how it might be impacting you, those you know or those you serve? If your organization is using some form of AI in its decision-making processes, what steps are you taking to ensure that bias doesn’t accidentally creep into the picture? Feel free to share your thoughts in the comments section below.