Why Artificial Intelligence Must Be Regulated


Photo by Etienne Girardet on Unsplash

Computers can be programmed to process data and take actions they were not explicitly programmed to take. The resulting data-processing technology can be broken down into four stages:

  1. Information acquisition
  2. Information processing
  3. Decision selection
  4. Action implementation

My personal research background is in decision-support systems, which typically automate stages 1–2 and return the patterns they find to the user, or sometimes stages 1–3 and suggest recommendations to the user. Automating all four stages, or (and this is key) automating the first three stages and acting unconditionally on the resulting recommendations, makes the technology seem “intelligent,” thus spawning the term artificial intelligence (AI).
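
To make the distinction concrete, here is a minimal Python sketch of the four stages, with invented data and function names; stopping after stage 2 or 3 yields a decision-support system, while running all four yields the kind of full automation that reads as “intelligent.”

```python
# Hypothetical sketch: the four stages of automated data processing.
# Stopping after stage 2 or 3 gives a decision-support system;
# running all four gives full automation.

def acquire_information():
    """Stage 1: gather raw data (here, canned sensor readings)."""
    return [3.1, 4.7, 5.2, 4.9]

def process_information(readings):
    """Stage 2: turn raw data into a pattern, e.g. a simple trend estimate."""
    return {"mean": sum(readings) / len(readings), "latest": readings[-1]}

def select_decision(pattern):
    """Stage 3: choose a recommended action based on the pattern."""
    return "raise_alert" if pattern["latest"] > pattern["mean"] else "do_nothing"

def implement_action(decision):
    """Stage 4: carry out the decision with no human in the loop."""
    print(f"Executing: {decision}")

def run(automate_through_stage):
    pattern = process_information(acquire_information())
    if automate_through_stage == 2:
        return pattern        # decision support: show the user what was found
    decision = select_decision(pattern)
    if automate_through_stage == 3:
        return decision       # decision support: suggest, let the user act
    implement_action(decision)  # "AI": act on the recommendation unconditionally

run(automate_through_stage=4)
```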

Therefore, for the purposes of this post, I will use the term AI to mean partial or complete automation of data processing.

Especially when AI takes the form of machine learning, which creates statistical models from data to make recommendations or take actions, it becomes increasingly opaque to people. This increased opacity has effects on its use by individuals, its introduction into organizations, and its use in society.

Effects of AI’s Opacity on Individuals, Organizations, and Society

Individual Effects

When AI returns recommendations to a user, increased opacity makes those recommendations difficult to use because the user doesn’t know why the system is making them. Instead, forms of transparency, like the rationale behind the recommendations, are needed. For example, on a past project a team I was on found that visualizations of the explicit trade-offs between a submarine’s speed and stealth were necessary for commanders to effectively use automated decision recommendations.
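
As a hedged illustration (the data structure and numbers are invented, not from that project), a decision-support system might attach the rationale and trade-offs to each recommendation instead of returning a bare answer:

```python
# Hypothetical sketch: returning a recommendation together with the
# trade-off rationale a commander would need in order to judge it.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str
    tradeoffs: dict  # quantities the user cares about, e.g. speed vs. stealth

def recommend_course(options):
    """Pick the quietest option, but expose what was given up to get it."""
    quietest = min(options, key=lambda o: o["noise"])
    fastest = max(options, key=lambda o: o["speed"])
    return Recommendation(
        action=f"Proceed at {quietest['speed']} knots",
        rationale="Chosen to minimize acoustic signature",
        tradeoffs={
            "speed_given_up_knots": fastest["speed"] - quietest["speed"],
            "noise_reduction": fastest["noise"] - quietest["noise"],
        },
    )

options = [{"speed": 12, "noise": 0.8}, {"speed": 6, "noise": 0.3}]
print(recommend_course(options))
```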

When an AI is fully automated, as in unmanned vehicles, forms of transparency like interpretability are needed, because otherwise the causes of failures cannot be understood by either users or maintenance employees. For example, let’s say Uber or Lyft had self-driving taxis, but they were unable to pick me up or bring me to my intended destination. If the app told me they were unavailable at this time because of inclement weather, I would at least understand and would plan my trip around that. Similarly, it would help engineers at those companies understand why they made less money on a certain day and how they could fix it.
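
A minimal sketch of what that could look like, with an invented reason code and API rather than anything these companies actually expose:

```python
# Hypothetical sketch: a fully automated service that surfaces the cause
# of a failure instead of silently declining the request.

WEATHER_OK = False  # imagine this comes from a perception or weather check

def log_for_engineers(event, cause):
    """Engineers can aggregate these causes to explain lost revenue and fix it."""
    print(f"[log] {event}: {cause}")

def request_ride(pickup, destination):
    if not WEATHER_OK:
        # Surfacing the cause helps the rider plan and helps engineers debug.
        log_for_engineers(event="ride_declined", cause="cameras degraded by heavy rain")
        return {
            "accepted": False,
            "user_message": "Self-driving vehicles are unavailable right now because of inclement weather.",
        }
    return {"accepted": True, "user_message": "A vehicle is on its way."}

print(request_ride("home", "airport"))
```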

Organization Effects

When AI is introduced into an organization, it reinforces existing information asymmetries and power imbalances. This is because different people in an organization have different needs for data, and some have more power than others. As a result, stakeholders with more power can use AI to hide information from others or to prevent others from hiding information from them.

For example, let’s say there is an AI tool that helps employees write daily status reports by suggesting items based on their activities. If it tracked the changes they made to documents throughout the week, such a tool could also be used by a manager to monitor when employees are working and what they are doing. Managers with that desire might instead simply adopt an AI tool that tracks employees directly, and they would have the power to force employees to install it on their machines.

In either of those cases, there is not much employees can do, because they gave up some of their rights when they signed their employment contracts.

Societal Effects

As in organizations, AI introduced in a societal context reinforces existing information asymmetries and power imbalances, just in different settings. In a context like law enforcement, AI tools could be used for surveillance and for automating sentencing decisions.

Unlike in an organization, though, none of us signed up for this.

The Solution: Regulation

Therefore, regulation of AI technologies is needed. Since the actions of AI are harder to predict than those of traditional technologies, I would suggest that specifications of AI technologies be written in terms of their capabilities and their vulnerabilities. For example, an AI technology might have capabilities like the ability to process and surface trends in stock-trading data, or it might have vulnerabilities like computer vision’s inability to “see” in inclement weather.
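
One way to picture such a specification (the format below is purely illustrative, not an existing standard) is a machine-readable declaration that ships with the product and that buyers can check against their own requirements:

```python
# Hypothetical sketch: a capability/vulnerability specification that a
# regulator could require vendors to publish alongside an AI product.

product_spec = {
    "product": "vision-based driving assistant",
    "capabilities": [
        "detects pedestrians and vehicles in daylight",
        "estimates trends in traffic flow from camera data",
    ],
    "vulnerabilities": [
        {"condition": "heavy rain, snow, or fog",
         "effect": "object detection accuracy drops sharply"},
        {"condition": "low-contrast clothing or darker skin tones",
         "effect": "higher miss rate for pedestrians"},
    ],
}

def unacceptable_risks(spec, conditions_buyer_cannot_accept):
    """Return the declared vulnerabilities a buyer has said they cannot accept."""
    return [v for v in spec["vulnerabilities"]
            if v["condition"] in conditions_buyer_cannot_accept]

print(unacceptable_risks(product_spec, {"heavy rain, snow, or fog"}))
```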

Individual Benefits

With this information, individuals can buy AI technology that better satisfies their needs, whether that is understanding its recommendations or understanding why a fully automated technology might break. For example, an individual might reconsider using a food delivery technology that advertises some businesses over the ones they searched for because doing so makes the company more money, or reconsider buying a computer-vision-based video game system if they learned that it doesn’t work very well with dark clothes or darker skin tones.

Organizational Benefits

When buying AI technology, an organization is taking responsibility for its actions. Therefore, the specification of an AI technology’s risks could be used to determine its actual cost. This would help a business such as a hospital reconsider purchasing an x-ray diagnosis AI if it risks diagnosing patients with pneumonia based on the type of x-ray machine used rather than on the image itself, because false diagnoses hurt its bottom line.
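
As a back-of-the-envelope illustration (all of these figures are invented), a declared risk could be folded into the expected total cost of owning the system:

```python
# Hypothetical sketch: pricing a declared risk into the purchase decision.
# Every figure here is invented for illustration.

purchase_price = 250_000            # vendor's asking price, in dollars
scans_per_year = 20_000
false_diagnosis_rate = 0.002        # declared risk tied to the x-ray machine type
cost_per_false_diagnosis = 15_000   # liability, retesting, unnecessary treatment

expected_annual_risk_cost = scans_per_year * false_diagnosis_rate * cost_per_false_diagnosis
print(f"Expected annual cost of false diagnoses: ${expected_annual_risk_cost:,.0f}")
print(f"Effective first-year cost: ${purchase_price + expected_annual_risk_cost:,.0f}")
```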

Societal Benefits

While police would likely be unwilling to be transparent about whom they are tracking and how, citizens deserve to know certain things. For example, a person might be stopped or arrested because a facial recognition technology incorrectly marked them as a suspect. Similarly, a person’s sentence might actually be driven by their skin color being treated as an indicator of recidivism. Once the causes of these situations are known, preferably through regulations or laws established beforehand, societal entities such as police forces can buy better AI products.