AI: Separating the Wheat From the Chaff — Part 1

Source: Artificial Intelligence on Medium

In recent years, few buzzwords have received as much hype and attention as Artificial Intelligence (AI). It is a highly contested topic, with some hailing AI as the bright future of tech and others warning that it is a slippery slope toward science-fiction nightmares.

Either way, one thing is certain: the concept that triggers so much emotion is indeed making an impact on the world as we know it.

But what does AI actually mean, and how can businesses leverage it to their benefit? More importantly, which innovative AI technologies are critical and significant, and which are just buzz with no real substance?

In this series on AI and innovation, we will walk through the AI landscape, answer these questions, and shed some light on this fascinating, fast-growing field of technology.

AI: Understanding the Basics

What Is Artificial Intelligence (AI)?

The term “Artificial Intelligence” was first coined at the Dartmouth Conference back in 1956. The conference participants set out to study the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” In other words, AI is the ability of a machine (in most cases, a computer) to “think” like a human.

The AI landscape has experienced several ups and downs in the decades since researchers began experimenting seriously with its various applications. Still, with the advent of modern technology and the accumulation of data, it is safe to say we are now experiencing one of the golden ages of AI, and it is impacting our daily lives in many ways.

Researchers classify AI into two major groups.

The first is “narrow intelligence” or “applied AI.” In this group, machines can carry out a single specific task. The algorithms in this group are limited in scope, but they can perform their tasks very well. In many instances, the algorithms perform much better than humans.

The second group is “general AI.” Machines in this group can perform a variety of tasks depending on the needs of the user. Such machines would need to operate like the human brain, and they require far more computing power than “narrow intelligence” machines.

Machine Learning

So now that we understand what AI is, let’s dive deeper into a topic that is often used interchangeably with it — Machine Learning (ML).

Machine learning is a subset of the AI landscape. It refers to machines that can read data and use it to learn how to perform a particular task. The process usually involves feeding the algorithm training data sets from which it learns to recognize patterns. Once trained, the algorithm can make decisions about new data based on the patterns it learned during training.
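To make that training-then-prediction cycle concrete, here is a minimal sketch in Python. The library (scikit-learn) and data set (the classic iris flower measurements) are our own illustrative choices, not something prescribed by the definition above.

```python
# A minimal sketch of the train-then-predict cycle: the model reads a
# labeled data set, learns its patterns, and then makes decisions
# about data it has never seen before.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a small labeled data set (iris flower measurements and species).
X, y = load_iris(return_X_y=True)

# Hold out a slice of the data to play the role of "new" unseen data.
X_train, X_new, y_train, y_new = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Training: the algorithm reads the data and learns its patterns.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# Prediction: the trained model is judged on data it has never seen.
print(f"Accuracy on unseen data: {model.score(X_new, y_new):.2f}")
```

Whatever the underlying algorithm, the pattern is the same: the model is fit on labeled examples, and the trained model is then applied to data it was never shown.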

Examples of Machine Learning

Machine learning has become a dominant technology in many aspects of our daily lives. Our social news feeds on Facebook and Twitter use machine learning algorithms to curate a feed that suits us, based on our past activity and the actions of users with similar tastes. Navigation apps like Google Maps and Waze use machine learning to find the quickest route to work or home. Spam filters use machine learning algorithms to identify and filter out spam messages.

Popular Applications of Machine Learning

One of the most common applications of machine learning is the recommendation engine, and these come in many flavors. You may not have noticed, but the news feed on Facebook is the output of a recommendation engine algorithm. It is rare to find an e-commerce store without recommendations of items “you may want,” all based on recommendation engine algorithms. Netflix is known for pioneering TV show and movie recommendations as part of its user experience.

There are several approaches to building a recommendation system. The most popular is collaborative filtering, in which the algorithm assumes that if someone has liked a particular kind of thing in the past, they will like similar things in the future, and that if two people have agreed in the past, they will agree again in the future.
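Here is a toy numerical sketch of that idea. The ratings matrix, the choice of cosine similarity, and the weighting scheme are illustrative assumptions on our part; production recommendation systems are far more sophisticated.

```python
# A toy user-based collaborative filtering sketch: users who rated
# items alike in the past are assumed to agree in the future.
import numpy as np

# Rows are users, columns are items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def predict_rating(user, item, ratings):
    """Predict a missing rating as a similarity-weighted average of
    other users' ratings for that item."""
    scores, weights = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        # Cosine similarity between the two users' rating vectors.
        a, b = ratings[user], ratings[other]
        sim = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        scores += sim * ratings[other, item]
        weights += sim
    return scores / weights if weights else 0.0

# Estimate how user 0 would rate item 2, which they have not tried yet.
# User 0 agrees closely with user 1 (who rated it 1), so the estimate
# lands much nearer 1 than user 2's rating of 5.
print(f"Predicted rating: {predict_rating(0, 2, ratings):.2f}")
```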

The second approach is called content-based filtering. It bases its recommendations on a user’s past behavior, linking the items or content they consumed in the past with similar items or content they may want to consume in the future.
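A small sketch of that idea follows, under our own simplifying assumption that each item is described by a short text blurb: the descriptions are vectorized with TF-IDF, and the unseen item closest in content to what the user already consumed is recommended.

```python
# A toy content-based filtering sketch: recommend the item whose
# description is most similar to what the user already consumed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical catalog: each item is described by a short blurb.
items = {
    "space-doc": "documentary about space exploration and rockets",
    "cook-show": "cooking show with recipes and kitchen tips",
    "mars-film": "science fiction film about a space journey to Mars",
    "bake-off": "baking competition with cakes and pastry recipes",
}
names = list(items)

# Turn each description into a TF-IDF vector.
matrix = TfidfVectorizer().fit_transform(items.values())

# Suppose the user just watched the space documentary.
watched = names.index("space-doc")
similarity = cosine_similarity(matrix[watched], matrix).ravel()

# Rank items by content similarity and pick the best unwatched one.
ranked = sorted(range(len(names)), key=lambda i: -similarity[i])
recommendation = next(i for i in ranked if i != watched)
print("Recommended:", names[recommendation])  # -> mars-film
```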

Face recognition, voice command processing, object recognition: all of these are incredibly successful applications of machine learning that we get to use every day. If you use “Siri,” “Alexa,” or “Cortana,” you are relying on machine learning algorithms.

The extent to which these services have proliferated shows how far the technology has advanced, and we are bound to see new achievements and innovation in machine learning in the years to come.

What’s next?

Here, we took a first deep dive into AI and ML and clarified the distinction between the two.

In part two of this AI series, we will review deep learning and artificial neural networks, important subfields of AI that many consider to be the future of AI and technological intelligence as we know it.

We will share everything you need to know about these cutting-edge developments, along with the most popular applications of these technologies today.