The Ethical Dilemma of Artificial Intelligence

Original article was published on Artificial Intelligence on Medium



It’s more than a conversation about future problems.

With the rise of Artificial Intelligence (AI) in mainstream culture, the futures depicted in movies and TV shows like Black Mirror and Ready Player One don’t seem so far off. This pressing reality, coupled with some disturbing behavior recently discovered in AI, has forced leaders of major tech corporations to have an open discussion about the ethics of AI, regulations, and the future of the human race.

Crash Course in AI

AI refers to computer systems that can essentially think for themselves and make decisions based on the data they’re fed. It relies on a branch of computing called Machine Learning, which imitates human intelligence by using complex algorithms to process massive amounts of information and come to an effective conclusion.
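The "learning from data" idea above can be sketched with a toy example. This is a hypothetical illustration, not any real product's algorithm: a one-nearest-neighbor classifier that makes decisions purely by comparing a new input to the examples it has been fed.

```python
# A minimal "learning from data" sketch: 1-nearest-neighbor classification.
# The model's only knowledge is the labeled examples it was given.

def train(examples):
    """'Training' here is simply memorizing labeled examples."""
    return list(examples)

def predict(model, point):
    """Decide by copying the label of the closest known example."""
    closest = min(model, key=lambda ex: abs(ex[0] - point))
    return closest[1]

# Hypothetical labeled data: (sensor reading, label)
data = [(0.1, "normal"), (0.3, "normal"), (2.5, "anomaly"), (3.0, "anomaly")]
model = train(data)
print(predict(model, 0.2))  # -> normal
print(predict(model, 2.8))  # -> anomaly
```

Real systems use far more sophisticated models, but the principle is the same: the decisions come entirely from the data the machine was given.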


Present And Future Use Of AI Technology

We’re in the early stages of AI with things like Siri, Alexa, automated marketing emails, and facial recognition software, to name a few. But this tech is advancing so rapidly that some scientists believe we’ll reach what’s known as “the singularity” (when machines become self-aware) by the year 2045. The implications of this go far beyond savvy smart home devices and unlocking phones with our faces.

Imagine legions of self-aware bipedal military androids that could replace the need for foot soldiers. Self-driving cars that can navigate traffic, change lanes, or take an exit without human assistance. AI-powered pathology tech that can diagnose a disease before the first symptom appears. The possibilities boggle the mind, but so do the perils of harnessing that much power.

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.” — ELON MUSK, MIT’S AEROASTRO CENTENNIAL SYMPOSIUM


Regulations And Government Oversight

Mr. Musk isn’t the only one calling for regulations. Tech leaders and government officials echo this sentiment due to the increased use of AI in healthcare, criminal justice, and national defense.

Companies like Google have tackled this problem head-on by crafting a set of AI Principles they use to gauge their AI applications. But is this enough? Can Google and others be trusted to hold themselves accountable? Or should tech companies be held accountable to governmental authorities as well?

Some say no, comparing corporate self-regulation to a fox guarding a hen-house, while others are in favor of moderate government oversight.

“The federal government should oversee, audit, and monitor. Individual agencies or collections of experts should handle the oversight rather than one big AI regulatory agency because each industry is governed by its own set of regulations.” -AI NOW INSTITUTE

Algorithmic Bias In AI: Problem And Solution

The ethics of AI is more than a conversation about future problems. It’s a present issue that’s already producing dire consequences. Remember, these machines learn by being fed data about the world (a world filled with corruption, bias, and injustice). So, it’s only natural that they would begin reflecting those biases.

From racial bias in facial recognition software to recruiting tools denying female applicants to training models showing signs of “cunning”, “aggression”, and “violence” to achieve their goal, AI is already imitating some of the worst of human behavior.

“The issue is, that often the data you’re training on reflects the world as it is, not the world how you would like it to be.” -JEFF DEAN, GOOGLE AI LEAD

So, the question becomes: how do you remove harmful biases or behavior from machine learning programs? You obviously don’t want to remove all bias, because some biases are useful (a spam filter is supposed to be biased against spam) while others are harmful. But how do you get these machines to favor the biases that work toward the common good?
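The mechanism behind biased AI is easy to demonstrate with a toy model. The records and the "learner" below are hypothetical, but they show the pattern described above: a model trained on skewed historical decisions faithfully reproduces them.

```python
# Sketch: a model trained on biased historical data learns the bias.
# Hypothetical hiring records: (qualified, group, hired_historically)
history = [
    (True, "A", True),  (True, "A", True),
    (True, "B", False), (True, "B", False),  # equally qualified, never hired
]

def learn_rule(records):
    """Learn a hire rate per group from the data exactly as given."""
    outcomes = {}
    for qualified, group, hired in records:
        outcomes.setdefault(group, []).append(hired)
    return {g: sum(h) / len(h) for g, h in outcomes.items()}

rule = learn_rule(history)
print(rule)  # -> {'A': 1.0, 'B': 0.0}: the data's bias is now the model's
```

Nothing in the code is "prejudiced"; the bias comes entirely from the world the training data reflects, which is exactly Jeff Dean's point in the quote below.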

Researcher Joy Buolamwini has made it her life’s mission to solve this problem. Founder of the Algorithmic Justice League (AJL), a diverse group of coders who find and eliminate bias in machine learning and AI programs, Joy believes who codes is just as important as what we code. It’s this belief that led Joy to recruit coders from different ethnicities, genders, religions, and socioeconomic backgrounds. The more diverse the coders, the less chance of algorithmic bias creeping in, Joy says.

She also sings the same tune as many other tech companies that believe the datasets used to train facial recognition and other applications need to be diverse as well.
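A simple first step toward that kind of dataset diversity is just counting who is represented. The sample labels below are hypothetical, but the check itself is a common sanity test before training:

```python
from collections import Counter

# Hypothetical demographic labels for a face-recognition training set.
samples = ["group_a"] * 80 + ["group_b"] * 15 + ["group_c"] * 5

counts = Counter(samples)
shares = {group: n / len(samples) for group, n in counts.items()}
print(shares)  # -> {'group_a': 0.8, 'group_b': 0.15, 'group_c': 0.05}
```

A model trained on this set would see one group sixteen times more often than another, which is how facial recognition systems end up far less accurate for underrepresented groups.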