Originally published by Ryan Hubbard, PhD, in Artificial Intelligence on Medium


The Ethics of Artificial Intelligence: Navigating the Field

The rise of artificial intelligence is changing our lives in fundamental ways. Algorithms know us better than our friends and relatives do, we outsource more and more of our decision-making to AI, and much of our socio-economic structure relies on algorithmic systems.

The future of AI will likely bring substantial benefits. However, like the development of any revolutionary technology, it will also bear costs.

Some of these costs are inevitable. It’s likely that we are exchanging more of our autonomy for higher levels of convenience and efficiency as we integrate AI into our lives. Other costs could be more significant and possibly catastrophic. It’s no surprise, then, that the ethics of artificial intelligence is a growing field. Many institutions, for example, are setting up guidelines for the use and design of AI.

In this article, I lay out the terrain of the ethics of artificial intelligence.

What AI Is

Any ethics of artificial intelligence needs to pin down what AI is, since even defining it is controversial. I’m going to defer to a definition provided by an expert group set up by the European Commission:

Artificial Intelligence is “software…systems…that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected…data…processing the information derived from this data and deciding the best action to take to achieve the given goal.”

Simply put, AI is a system with a preprogrammed goal that gathers and uses data to pursue that goal more or less on its own. YouTube, for example, uses AI in the form of a recommendation algorithm whose goal is to keep users watching YouTube videos. Fulfilling that goal helps YouTube fulfill the goal of any for-profit corporation: making money. The data the YouTube algorithm receives are the videos its users watch. It then acts on this data by recommending videos each user is likely to prefer.

These kinds of algorithms act as preference predictors. They are eerily accurate.
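
To make this concrete, below is a minimal Python sketch of a preference predictor. The tag-matching heuristic and all names are assumptions for illustration, not YouTube’s actual system; real recommenders learn from far richer signals.

from collections import Counter

def recommend(watch_history, candidates, top_n=3):
    """Rank candidate videos by how well their tags match the viewing history."""
    # Toy model: count how often each tag appears in videos already watched.
    tag_counts = Counter(tag for video in watch_history for tag in video["tags"])

    # A candidate scores higher the more its tags overlap with past viewing.
    def score(video):
        return sum(tag_counts[tag] for tag in video["tags"])

    return sorted(candidates, key=score, reverse=True)[:top_n]

history = [{"title": "Intro to Chess", "tags": ["chess", "games"]},
           {"title": "Chess Openings", "tags": ["chess", "strategy"]}]
candidates = [{"title": "Endgame Tactics", "tags": ["chess", "strategy"]},
              {"title": "Baking Bread", "tags": ["cooking"]}]

print(recommend(history, candidates))  # the chess video ranks first

Even a toy scorer like this hints at why such systems become accurate: every additional video watched sharpens the estimate of what the user will click next.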

Three Approaches in AI Ethics

The ethics of AI can be approached from many angles. One angle is to investigate the technology itself. AI technologies include autonomous vehicles, lethal autonomous weapons systems, and preference predictor systems like the YouTube algorithm.

Another angle is to examine the ethics of different AI methods, such as deep learning or reinforcement learning. Specific methods raise unique ethical issues. For example, deep learning systems can be ‘black boxes’: it is often unclear how the system arrived at its decision.

This can be ethically troublesome for deep learning systems whose decisions affect the well-being of others. AI has been used, for example, to predict recidivism rates, which are then factored into judicial sentencing. If it’s unclear how the AI arrived at its recidivism prediction, then a sentence based on that prediction may not be justified.

A third angle is to investigate how AI is applied in a certain sector such as military, healthcare, or judicial systems. A bioethicist interested in AI may, for example, investigate all the ways in which AI is used in healthcare.

Two Dimensions of AI Ethics

Pak-Hang Wong and Judith Simon outline two dimensions of AI ethics. The approaches discussed above can examine AI ethics along one or more of these dimensions.

The first dimension is ethics by design, which is more commonly known as machine ethics. This involves building AI in such a way that its behavior aligns with our commonly held values.

For example, designers of autonomous weapons systems would need to ensure that these systems do not target innocent civilians. Designers of autonomous vehicles must ensure that the vehicle will make good moral decisions in an emergency situation: the vehicle, for example, should avoid hitting a person even if that means hitting an animal.
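
To see what ethics by design might look like in practice, here is a deliberately simplistic Python sketch that hard-codes a value hierarchy into an emergency maneuver choice. The harm ranking and all names are assumptions for illustration; real vehicles face uncertain perception and far harder trade-offs.

# Toy value hierarchy: lower numbers mean less harm.
HARM_RANK = {"none": 0, "property": 1, "animal": 2, "person": 3}

def choose_maneuver(options):
    """Pick the maneuver whose likely collision causes the least harm.

    options maps each maneuver name to what it would likely hit.
    """
    return min(options, key=lambda maneuver: HARM_RANK[options[maneuver]])

emergency = {"swerve_left": "animal", "brake_straight": "person"}
print(choose_maneuver(emergency))  # swerve_left: an animal rather than a person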

The second dimension is ethics in design. This involves developing methods for ethically evaluating the use of AI. For example, should AI systems take over tasks now performed by human professionals, such as medical diagnosis by physicians?

AI systems are increasingly making decisions for us: the routes we take, the videos we watch, who we should date. How much of our own autonomy should we outsource to these systems? Is it worth the convenience? How autonomous should we make our AI systems?

Autonomous AI

The issue of AI autonomy is particularly poignant due to the risk of creating highly autonomous AI. Autonomy is the ability to make decisions on one’s own. It’s the capacity to be self-governing. Autonomous AI, then, can make its own decisions with more or less human involvement. AI autonomy comes in degrees, depending on how much control a human has over the system.

AI ethicists distinguish three degrees of AI autonomy: human in the loop, human on the loop, and human out of the loop. Let’s take an autonomous weapons drone as an example. The ‘loop’ refers to the sequence of events starting with data entering the AI system and ending with the system carrying out an action. In a semiautonomous system, the human is ‘in the loop’: in the case of a drone, the human plays an active role in each decision the system makes.

The next degree of autonomy is a supervised autonomous system in which a human is on the loop. A human, for example, would supervise the AI drone’s decision-making process and be able to intervene when necessary. In a fully autonomous system, the human is out of the loop and may not be able to intervene in the decision-making process.
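
A short Python sketch can summarize the three degrees. The control logic and names here are assumptions for illustration, not a real weapons controller:

from enum import Enum

class Autonomy(Enum):
    IN_THE_LOOP = "human makes the final call on every action"
    ON_THE_LOOP = "human supervises and may veto"
    OUT_OF_THE_LOOP = "human cannot intervene"

def run_cycle(proposed_action, mode, human_approves=False, human_vetoes=False):
    """One pass through the sense-decide-act loop of a hypothetical drone."""
    if mode is Autonomy.IN_THE_LOOP:
        # Semiautonomous: nothing happens without active human approval.
        return proposed_action if human_approves else "hold"
    if mode is Autonomy.ON_THE_LOOP:
        # Supervised: the system proceeds unless the human intervenes.
        return "hold" if human_vetoes else proposed_action
    # Fully autonomous: the action executes with no chance to intervene.
    return proposed_action

print(run_cycle("engage", Autonomy.IN_THE_LOOP))                         # hold
print(run_cycle("engage", Autonomy.ON_THE_LOOP))                         # engage
print(run_cycle("engage", Autonomy.OUT_OF_THE_LOOP, human_vetoes=True))  # engage

Note that in the fully autonomous case the human’s veto simply has no effect.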

The case of combat drones makes it clear that risk increases as autonomy increases. At the same time, the more autonomous the system, the faster it can act, and speed of action is a military advantage. This is troublesome because it gives states an incentive to develop fully autonomous weapons systems.

This leads us to some general criticisms of developing certain kinds of AI.

Three Ethical Criticisms of Developing AI

One common criticism of developing AI is that it may undermine human autonomy. Our capacity to act autonomously is a unique feature of our personhood; thus, AI has the potential to undermine our humanity. If we continue to outsource our decision-making to AI for the sake of convenience, then sooner or later we may become drones ourselves.

Another criticism is that AI may disrupt human relations, which are fundamental to our well-being. For example, many professions involve human interaction: a doctor giving medical advice, a teacher presenting material, a cashier ringing up an order.

Arguably, the more AI replaces humans in performing these services, the less human interaction we will encounter. And this may diminish the richness of our interpersonal relationships.

The most alarming criticism is that artificial general intelligence (AGI) may threaten humanity itself. This may seem farfetched, but well-regarded philosophers take it seriously.

Nick Bostrom, for example, discusses the possibility of an intelligence explosion, in which AGI develops the ability to improve itself recursively, far beyond what humans are capable of. If the goals of such a super-AGI conflict with human goals, there may be little we can do to prevent it from removing humans from the equation.

Concluding Thoughts

The idea that AI could become an existential threat may be farfetched. Nevertheless, it’s clear that AI is increasingly shaping the human condition. What it means to be human in the age of artificial intelligence is, I believe, up to us. It will depend on how we choose to design, use, and regard AI. Crucially, it will also depend on how we come to regard ourselves as AI becomes a pervasive, inextricable part of our lives.