Original article was published by Leo Sjöberg on Artificial Intelligence on Medium
Making Sense of Artificial Intelligence
Artificial Intelligence is everywhere, yet few people know what it is, and even fewer understand how it works.
The idea of Artificial Intelligence (AI) dates back to the 1950s, when Alan Turing published the paper “Computing Machinery and Intelligence”, in which he describes the “imitation game”. The imitation game is a hypothetical scenario in which an interviewer poses written questions to two contestants, one human and one machine. If the interviewer is unable to discern which is which, the machine is considered, to some extent, intelligent.
But the 1950s are a long way from 2020, so what does AI mean today? The answer differs significantly depending on whether you ask someone in the technology industry or an academic. While academia has worked to narrow and refine the definitions of intelligence and machine intelligence, startups will call almost anything “AI”, and not without reason: in the first quarter of 2020 alone, AI startups received $8.4 billion in funding, a strong incentive to claim AI capabilities.
So what does the industry consider AI? Loosely, anything that seems smart. Beyond that, many companies look to machine learning to build out their AI functionality. Machine learning, though only a subset of AI, is what most people picture when they think of AI. It’s what helps you avoid congested traffic in Google Maps. It’s what presents you with attractive Tinder matches just often enough to keep you swiping. And it’s what lets a Tesla drive itself.
Fundamentally, machine learning takes one of two approaches: either you teach the machine, or it teaches itself. Self-learning, or “reinforcement learning”, works well when many things change at once and there’s no way for a human to cross-check and verify the results. It suits a regularly changing environment for which you can define clear “success validation”. In other words, reinforcement learning does not require you to dig into details; instead it demands that you specify what “success” means. In practice, this means creating criteria that continuously score each attempt. For example, an AI built for chess might define success relative to the number of moves played and the number of chess pieces the AI player still holds in its possession.

Teaching the machine, by contrast, is “supervised learning”: you provide massive data sets of labelled examples so the AI learns to recognise patterns. This is the basis of image recognition. It would be impossible for a machine to find pictures of “cats” without some initial knowledge of what a cat is, so we label data for the AI, and then confirm that what it recognises is in fact a cat.
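To make the chess example concrete, here is a minimal sketch of what such a scoring criterion might look like. This is purely illustrative: the piece values, the win bonus, and the per-move penalty are all assumptions chosen for the example, not anything a real chess engine prescribes.

```python
# Hypothetical scoring function for a reinforcement-learning chess agent.
# It rewards winning, keeping material, and finishing in fewer moves,
# mirroring the success criteria described above. All weights are assumed.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def score_attempt(pieces_remaining, moves_played, won):
    """Return a score for one game attempt; higher is better."""
    material = sum(PIECE_VALUES.get(piece, 0) for piece in pieces_remaining)
    win_bonus = 100 if won else 0
    # Penalise long games slightly so faster wins score higher.
    return win_bonus + material - 0.5 * moves_played

# A quick win with plenty of material left scores well:
print(score_attempt(["queen", "rook", "pawn"], moves_played=40, won=True))  # 95.0
```

An agent trained against a function like this would, over many games, favour behaviour that raises the score, which is exactly the role the “success validation” plays: the designer specifies what good looks like, and the system works out how to get there.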
But most of us will never get around to building any AI of our own, so what does AI mean for the rest of us? For a few, such as warehouse workers, it might mean job losses, but for many it means more time for meaningful work. Computers excel at repetition and precision, which also happen to be exactly what humans are quite terrible at, so a big part of AI is building tools that make the rest of our work easier and free up time for higher-level work.
AI should not be a concern for most people, but it should spark interest. And it should be understood, because it’s hard to use a tool, or comprehend its implications, without the slightest idea of how it works or what it does.