Original article was published on Artificial Intelligence on Medium
The Good old-fashioned “Symbolic AI”
Back in 1959, three AI pioneers set out to build a computer program that simulated how a human thinks to solve problems.
Allen Newell was a psychologist who was interested in simulating how humans think, and Herbert Simon was an economist who later won the Nobel Prize for showing that humans aren't all that good at thinking. They teamed up with Cliff Shaw, a programmer at the RAND Corporation, to build a program called the General Problem Solver.
To keep things simple, Newell, Simon, and Shaw decided it was best to think about the content of a problem separately from the problem-solving technique. And that’s a really important insight.
Computers are logical machines that use math to do calculations, so logic was an obvious choice for the General Problem Solver’s problem-solving technique. Representing the problem itself was less straightforward. But Newell, Simon, and Shaw wanted to simulate humans, and human brains are really good at recognizing objects in the world around us.
So in a computer program, they represented real-world objects as symbols. That’s where the term Symbolic AI comes from, and it’s how certain AI systems make decisions, generate plans, and appear to “think.”
Good old-fashioned AI
If you’ve ever applied for a credit card, purchased auto insurance, or played a computer game newer than something like Pac-Man, then you’ve interacted with an AI system that uses Symbolic AI.
Modern neural networks train a model on lots of data and predict answers using best guesses and probabilities. But Symbolic AI, or “good old-fashioned AI” as it’s sometimes called, is hugely different. Symbolic AI requires no training, no massive amounts of data, and no guesswork.
It represents problems using symbols and then uses logic to search for solutions, so all we have to do is represent the entire universe we care about as symbols on a computer… no big deal.
Logic is our problem-solving technique and symbols are how we’re going to represent the problem on a computer.
Symbols can be anything in the universe: numbers, letters, words, bagels, donuts, toasters.
One way we can visualize this is by writing symbols surrounded by parentheses, like (donut) or (Shivam).
A relation can be an adjective that describes a symbol, and we write it in front of the symbol that’s in parentheses. So, for example, if we wanted to represent a chocolate donut, we can write that as chocolate(donut).
Relations can also be verbs that describe how symbols interact with other symbols. So, for example, I can eat a donut, which we would write as eat(Shivam, donut) because the relation describes how one symbol is related to the other. A symbol can be part of lots of relations depending on what we want our AI system to do.
All of our examples in this article will include a max of two symbols for simplicity, but you can have any number of symbols described by one relation. A simple way to remember the difference between symbols and relations is to think of symbols as nouns and relations as adjectives or verbs that describe how symbols play nicely together.
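One minimal way to sketch this in code is to store each relation as a tuple whose first element is the relation name and whose remaining elements are the symbols it describes. The symbol and relation names below are the article's own examples; the tuple representation is just one possible choice.

```python
# Symbols are just names; relations are tuples: (relation, symbol, ...).
# chocolate(donut)      -> ("chocolate", "donut")
# eat(Shivam, donut)    -> ("eat", "Shivam", "donut")
facts = {
    ("chocolate", "donut"),
    ("eat", "Shivam", "donut"),
}

# Checking whether a relation holds is just a membership test.
print(("chocolate", "donut") in facts)  # True
print(("chocolate", "bagel") in facts)  # False
```

Notice that a relation over two symbols, like eat(Shivam, donut), is just a longer tuple; the same scheme extends to any number of symbols.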
This way of thinking about symbols and their relations lets us capture pieces of our universe in a way that computers can understand. And then they can use their superior logic powers to help us solve problems. The collection of all true things about our universe is called a knowledge base, and we can use logic to carefully examine our knowledge bases in order to answer questions and discover new things with AI.
This is basically how Siri works. Siri maintains a huge knowledge base of symbols, so when we ask her a question, she recognizes the nouns and verbs, turns the nouns into symbols and verbs into relations, and then looks for them in the knowledge base.
Let’s try an example of converting a sentence into symbols and relations, and using logic to solve questions.
Let’s say that “Shivam drives a smelly, old car.” I could represent this statement in a computer with the symbols Shivam and car, and the relations drives, smelly, and old. Using logical connectives like AND and OR, we can combine these symbols to make sentences called propositions. And then, we can use a computer to figure out whether these propositions are true or not using the rules of propositional logic and a tool called a truth table.
And the truth table helps us decide what’s true and what’s not. So, in this example, if the car is actually smelly and actually old, and if Shivam actually drives the car… then the proposition, “Smelly car AND old car AND Shivam drives the car.” is true.
We can understand that sort of logic with our brains: if all three things are true, then the whole proposition is true. But for an AI to understand that, it needs to use some math. With a computer, we can think of a false relation as 0 and a true relation as any number that’s not 0.
We can also think of ANDs as multiplication and ORs as addition.
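Here is that arithmetic view of logic written out directly, following the article's convention that true is 1, false is 0, AND is multiplication, and any nonzero result counts as true:

```python
smelly = 1   # the car is smelly (true)
old = 1      # the car is old (true)
drives = 1   # Shivam drives the car (true)

# "smelly AND old AND drives" as multiplication:
proposition = smelly * old * drives
print(bool(proposition))  # True

# If the car is not actually old, one factor becomes 0:
print(bool(smelly * 0 * drives))  # False
```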
But let’s look at what happens to the math if the car is not actually old. Again, our brains might be able to jump to the conclusion that if one of the three things isn’t true, then the whole proposition must be false.
But to do the math like an AI would, we can translate this proposition as true times false times true, which is 1 times 0 times 1. That equals 0, which means the whole proposition is false. So that’s the basics of how to solve propositions that involve AND.
But what if we want to know if Shivam drives a car and that the car is either smelly OR old? Like I mentioned earlier, OR can be translated as addition. So, using our math rules, we can fill out this new, bigger truth table.
Another logical connective besides AND and OR, is NOT, which switches true things to false and false things to true. And there are a handful of other logical connectives that are based on ANDs, ORs, and NOTs.
One of the most important ones is called implication, which connects two different propositions.
Basically, what it means is that IF the left proposition is true, THEN the right proposition must also be true. Implications are also called if/then statements. We make thousands of tiny if/then decisions every hour (like, for example, IF tired THEN take nap or IF hungry THEN eat snacks).
And modern Symbolic AI systems can simulate billions of if/then statements every second!
Simply put, an implication is true if the THEN-side is true or the IF-side is false.
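That rule translates directly into code, since "IF p THEN q" is equivalent to "(NOT p) OR q":

```python
def implies(p, q):
    """IF p THEN q: true whenever q is true or p is false."""
    return (not p) or q

print(implies(True, True))    # True
print(implies(True, False))   # False (the only false case)
print(implies(False, False))  # True
print(implies(False, True))   # True
```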
Using the basic rules of propositional logic, we can start building a knowledge base of all of the propositions that are true about our universe. After that knowledge base is built, we can use Symbolic AI to answer questions and discover new things!
So if we populate a knowledge base with some propositions, then a program can find new propositions that fit with the logic of the knowledge base without humans telling it every single one. This process of coming up with new propositions and checking whether they fit with the logic of a knowledge base is called inference.
Expert Systems & Advantages over Neural Networks
Over the years, we’ve created knowledge bases for grocery stores, banks, insurance companies, and other industries to make important decisions. These AI systems are called expert systems because they basically replace an expert like an insurance agent or a loan officer. Symbolic AI expert systems have some advantages over other types of AI that we’ve talked about, like neural networks.
- First, a human expert can easily define and redefine the propositional logic in an expert system. If a bank wants to give out more loans, for example, then they can change propositions involving credit score or account balance rules in their AI’s knowledge base. If a grocery store decides that they don’t want to discount hot dogs during the sandwich sale, then they might redefine what it means to be a sandwich or a hot dog.
- Second, expert systems make conclusions based on logic and reason, not just trial-and-error guesses like a neural network.
- And third, an expert system can explain its decisions by showing which parts were evaluated as true or false. A Symbolic AI can show a doctor why it chose one diagnosis over another or explain why an auto loan was denied. The hidden layers in a neural network just can’t do that… at least, not yet.
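To see how that kind of explanation falls out naturally, here is a hypothetical loan-approval check that records which conditions evaluated true or false. The thresholds and field names are made up for illustration.

```python
applicant = {"credit_score": 580, "income": 45000}

# Record the result of every condition so the decision can be explained.
checks = {
    "credit_score >= 620": applicant["credit_score"] >= 620,
    "income >= 30000": applicant["income"] >= 30000,
}

approved = all(checks.values())
print("approved:", approved)
for condition, result in checks.items():
    print(f"  {condition}: {result}")  # the trace a neural network can't give
```

The denial comes with a reason attached (the failed credit-score condition), which is the explainability advantage described above.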
This, so-called, “good old-fashioned AI” has been really helpful in situations where the rules are obvious and can be explicitly entered as symbols into a knowledge base.
But this isn’t always as easy as it sounds. How would you describe a hand-drawn number 2 as symbols in a number knowledge base? It’s not that easy.
Plus, lots of scenarios are not just true or false; the real world is fuzzy and uncertain. As we grow up, our brains learn intuition about these fuzzy things, and this kind of human intuition is difficult, or maybe impossible, to program with symbols and propositional logic.
Finally, the universe is more than just a collection of symbols. The universe has time, and over time, facts change, and actions have consequences.