9 Critical AI Weaknesses to Consider

Source: Deep Learning on Medium

HAL 9000, C-3PO, The Terminator, Ava from Ex Machina, David from Prometheus, TARS and CASE from Interstellar: these are usually what we think of when we think of Artificial Intelligence. Though certainly not human, these fictional computers are what we would call Artificial General Intelligence (AGI). They think and communicate in a human-like way. They can learn and do many things, not just a single repetitive and limited task. Their intelligence is seemingly superhuman, threatening to replace or overtake us in some instances. But for now, at least, these are strictly science fiction.

The AI we have today is classified as Narrow Artificial Intelligence (NAI). Think of Deep Blue, the chess-playing computer that beat Garry Kasparov; Watson, the Jeopardy!-playing computer that beat Ken Jennings; or AlphaGo, the Go-playing computer that beat grandmaster Lee Sedol. These hyper-complex collections of matrices and algorithms are designed to perform very specific operations extremely well. When the tasks given to them have well-defined goals and desired outcomes, they easily outperform their human counterparts.

But for all their power and utility, Narrow AIs have some crucial weaknesses and shortcomings:

Let’s start with data.

1. Training Them Requires Enormous Amounts of Data

AI today is made especially useful through machine learning. Machine learning algorithms are trained to learn from experience so they can adapt to changes without needing to be re-coded every time. But machine learning is not the same as human learning. To train an AI to do a job effectively, you need an enormous amount of data about the tasks being performed.
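
To see why volume matters, here is a minimal sketch (using scikit-learn and its toy digits dataset as stand-ins for a real task; the model choice is arbitrary) of how accuracy typically climbs as the training set grows:

```python
# Toy illustration: model quality usually improves with training-set size.
# Requires scikit-learn; dataset and model are arbitrary stand-ins.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])          # train on a subset
    print(f"{n:5d} training examples -> "
          f"test accuracy {model.score(X_test, y_test):.2f}")
```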

This is why those terms-of-service contracts and privacy policies are so prevalent around the web. The devices you use and the websites you visit are collecting valuable data points from you all the time. In some cases they are being funneled into these algorithms to make them do incredible things. However, these incredible things come at the cost of privacy. If your organization depends on trade secrets, the data collection required to train an AI system could prove risky. If an AI company is collecting information related to those secrets in order to build you an AI system, they or any party with access to that data may be able to reverse engineer the products you create.

2. That Data is Full of Biases

Biases are an inescapable fact of life. We all have them, and they are embedded in the way we describe the world. Most are benign, but as we know, many are not.

For instance, some cities have started implementing PredPol, an AI-based tool used to predict crime. To function properly, it must draw from large historical data sets. The problem many critics have pointed to is that many of these data points were created during times when institutionally racist policies resulted in higher arrest numbers in impoverished areas, particularly among people of color. This skews the tool’s calculations and makes it appear to the officers using it that more crimes are expected in those neighborhoods than may actually be the case. Critics claim this biased data inadvertently perpetuates cycles of arrest that can devastate communities for entire generations.
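
A toy simulation makes this feedback loop concrete (all numbers below are invented for illustration): two neighborhoods with identical true crime rates produce very different arrest records once one is patrolled more heavily, and a model that extrapolates from those records will recommend even more patrols:

```python
# Toy simulation of feedback in biased policing data (all numbers invented).
# Both neighborhoods have the SAME true crime rate, but B is patrolled
# twice as heavily, so it generates roughly twice the recorded arrests.
import random

random.seed(0)
TRUE_CRIME_RATE = 0.05                     # identical in both neighborhoods
patrol_intensity = {"A": 1.0, "B": 2.0}    # B is over-policed

arrests = {hood: 0 for hood in patrol_intensity}
for _ in range(10_000):                    # 10,000 opportunities each
    for hood, patrol in patrol_intensity.items():
        crime_occurs = random.random() < TRUE_CRIME_RATE
        # A crime is only RECORDED if patrols happen to observe it.
        observed = crime_occurs and random.random() < 0.4 * patrol
        arrests[hood] += observed

# A naive predictor extrapolating from these counts would send still
# more patrols to B, inflating B's future data even further.
for hood, count in arrests.items():
    print(f"Neighborhood {hood}: {count} recorded arrests")
```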

3. The Data is Vulnerable to Sabotage

We’re all familiar with fake news by now. Well, AI has a lot of trouble telling real data from fake data. Humans seeking to intentionally sabotage, game, or troll an algorithm can do so fairly easily, sometimes evading detection altogether.

Microsoft’s Tay is a classic case of this. Initially designed to be a “millennial” chat bot, this AI began spewing hateful racist, sexist, and homophobic speech. The reason: a group of internet trolls on 4chan banded together to intentionally sabotage it. The whole thing took only a few days. Luckily, this was obvious enough to be caught and stopped, but more subtle data corruption might go undetected.
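
What happened to Tay was a form of data poisoning. The sketch below (synthetic toy data, nothing to do with Tay’s actual pipeline) shows the general mechanism: flip a fraction of the training labels and watch accuracy fall:

```python
# Minimal sketch of label-flipping "data poisoning" on synthetic toy data.
# Flipping even a modest fraction of training labels degrades the model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for poison_frac in (0.0, 0.1, 0.3, 0.45):
    y_poisoned = y_train.copy()
    n_flip = int(poison_frac * len(y_poisoned))
    y_poisoned[:n_flip] = 1 - y_poisoned[:n_flip]   # attacker flips labels
    model = LogisticRegression().fit(X_train, y_poisoned)
    print(f"{poison_frac:4.0%} of labels flipped -> "
          f"test accuracy {model.score(X_test, y_test):.2f}")
```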

4. The Systems They Run on Contain Proprietary Code That Is Not Made Available to the Public

This is known as “blackboxing”. In other words, as far as outsiders know, they are just dealing with a magical black box that spits out the desired result. How or why it got that result is a trade secret. Considering what we just read about biases and sabotage, this lack of transparency could easily become a liability.

5. AIs Have Trouble Understanding Context

Human language and communication are highly contextual. Jokes, puns, poems, slang, and colloquial phrases all take on different meanings depending on the who, what, when, where, why, and how of the given situation. Saying “I need a lift” can mean you need an elevator, you need someone to raise your spirits, you need someone to give you a ride in a car, or you need to be physically lifted, depending on context. AIs do not discern these meanings well at all.
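
A minimal sketch of the failure mode, using a hypothetical keyword-based intent classifier (the INTENTS table and naive_intent function are invented for illustration):

```python
# Toy illustration of why keyword matching fails on context.
# This hypothetical "intent classifier" maps each keyword to a single
# meaning, so every sense of "I need a lift" collapses into one answer.
INTENTS = {"lift": "request_ride"}   # one fixed sense per keyword

def naive_intent(utterance: str) -> str:
    for keyword, intent in INTENTS.items():
        if keyword in utterance.lower():
            return intent
    return "unknown"

for context, utterance in [
    ("standing by a broken elevator", "I need a lift"),
    ("feeling sad after bad news",    "I need a lift"),
    ("stranded with a dead car",      "I need a lift"),
]:
    # The context is printed for us, but the model never uses it.
    print(f"[{context}] -> {naive_intent(utterance)}")
```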

6. AIs Cannot Discern Causes

Humans are good at identifying problems and wondering where they come from. Even the youngest child will incessantly ask “Why?” Machines generally don’t. AIs are pretty good at finding correlations, but as the saying goes: correlation is not necessarily causation. An AI can find patterns, but that doesn’t mean the patterns it finds are significant. If humans conflate the correlations found by an AI with causation, the results can be disastrous.
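
The classic ice-cream-and-drownings example, sketched below with invented numbers, shows what a pure pattern-finder sees: a strong correlation produced entirely by a hidden common cause, temperature:

```python
# Toy confounder demo: ice-cream sales and drownings correlate strongly
# because both are driven by temperature, not by each other.
# (All numbers are invented for illustration.)
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(20, 8, size=1000)            # the hidden cause
ice_cream   = 2.0 * temperature + rng.normal(0, 4, 1000)
drownings   = 0.5 * temperature + rng.normal(0, 2, 1000)

r = np.corrcoef(ice_cream, drownings)[0, 1]
print(f"correlation(ice cream, drownings) = {r:.2f}")
# A pattern-finder sees a strong link; only causal reasoning (or a
# controlled experiment) reveals that banning ice cream saves no one.
```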

7. AIs Still Need Lots of Human Supervision to Be Trained Properly

As it stands, machine learning still requires a lot of supervision. Without human oversight, AIs often make errors that humans never would. In fact, an entire class of young workers in China is making a decent living simply supervising and checking the work of AIs. Unsupervised machine learning is a much-desired goal for those working in AI, but there is still no telling how long it will be before we get there.
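
One common oversight pattern is a review queue, in which the model acts alone only when it is confident and defers to a person otherwise. A minimal sketch, with the threshold, dataset, and model all illustrative assumptions:

```python
# Minimal human-in-the-loop sketch: route low-confidence predictions to a
# human reviewer instead of trusting the model blindly.
# (Threshold, dataset, and model are illustrative, not a real pipeline.)
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

CONFIDENCE_THRESHOLD = 0.9
probs = model.predict_proba(X_test)
needs_review = (probs.max(axis=1) < CONFIDENCE_THRESHOLD).sum()
print(f"{needs_review} of {len(X_test)} predictions flagged for human review")
```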

8. AI Presents Us with Serious Moral Quandaries

If an autonomous vehicle plows into multiple cars and pedestrians, who is at fault? How can and should AIs be used in war? How do we mitigate the effects of the biases that get encoded into the data and final output? Heard of the trolley problem? AI presents a whole new set of tricky moral issues even knottier than that.

To date, no one has come up with satisfying universal solutions to these problems. What’s even more troubling is that some authoritarian countries may have fewer qualms about these ethical questions than democratic ones.

9. AI Can Only Be as Good as the Data Used to Train It

By definition, data is always a representation of the past and its patterns. As we have seen, there are many factors that can threaten the integrity of any given data set. If situations arise for which there is no comparable data to draw from, an AI could hit a dead end. The code and algorithms that run an AI represent “perfect models” of the world originating in the human mind. But how well do those models hold up in a dirty, complex world?

Even with machine learning or deep learning, programmers must use labeled data and set the rules and parameters for the learning to be done. As well as they serve us, language and logic are still tricky and sometimes insufficient tools for describing the phenomenal world around us. As the saying goes: “The map is not the territory.” We may never fully collect and describe the world, nor the complexities inherent in its many contexts, in a way that would translate to an AI.

Conclusion: Narrow AI is Powerful, But Limited

These are just some of the issues we understand about AI today, and it isn’t hard to imagine some very precarious scenarios emerging from them. Add to that dogma, media misrepresentation, and a tendency among technologists to buy into hype, and we have a recipe for potential disaster.

AI is undoubtedly powerful — which is exactly why we have to take it seriously and understand its weaknesses and nuances. Otherwise we may really end up “summoning the demon” after all — even with Narrow AI.