Source: Deep Learning on Medium
This is my take on what understanding means for us humans: how we do it, and why it is optional.
We actually have two separate faculties at our disposal: the brain's neural-net hardware and the rational mind that runs on top of it. The latter has a different way of storing what we learn about the real world and of using that information to predict real-world outcomes.
Incidentally, that makes understanding (a.k.a. being rational) optional for us. So most of us don't bother with (and are often discouraged from) gaining an understanding of things in the real world. Instead, we tend to rely on the neural net, which is pretty amazing at guessing the right choices. It does this by memorizing everything we sense (including how things develop over time) as pictures (mental images, if you like). It can then look for similarities with what it saw in the past, find the closest match, and, by association, pick its next move. Beyond that, it has no clue what is actually in the picture. That is why the result is always a guess, and inherently superficial: the net will guess wrong whenever two pictures depicting very different objects or circumstances happen to look similar enough.
And that is why it is unexplainable: it simply doesn't know. In fact, a human relying on their neural net is acting exactly like a deep-learning AI. Talking to such a person is no different from talking to a chatbot.
Being clueless is what makes the neural net 100% irrational even when it is guessing 100% on point. And again, that is how many (most?) of us operate.
Actual understanding (a.k.a. knowledge) is always the product of a person's rational mind. Instead of guessing superficially, it relies on mental models to, in effect, build and run a simulation of the real world. That is how we "know" things: by developing or learning a proper mental model and running it to predict real-world outcomes. As a fundamental capability, we all have it; the difference between people lies in the quantity and quality of their mental models.
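The contrast with the lookup above can be sketched the same way: a mental model is a runnable rule that generates the outcome, rather than a memory to match against. A toy example, using the familiar constant-acceleration model of a falling object (the scenario is mine, not the author's):

```python
def simulate_fall(height_m, g=9.81):
    """Predict how long an object takes to fall from height_m meters.

    Model assumptions: constant gravity, no air resistance, so
    height = g * t**2 / 2, which gives t = sqrt(2 * height / g).
    """
    return (2 * height_m / g) ** 0.5

# The model extrapolates to situations never "seen" before -- no memorized
# pictures needed, and the prediction comes with a reason attached.
print(round(simulate_fall(20.0), 2))  # prints 2.02 (seconds)
```

Unlike the nearest-neighbor guess, this prediction is explainable: you can point at the rule and the assumptions behind it, and you can tell in advance where it will break (e.g. a feather in air).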
Ideally, you want your simulation to provide complete coverage of the real world. It doesn't have to be very deep or detailed (it wouldn't fit anyway ;), but it would be nice if it had no major holes, so you can see the whole picture.
Unfortunately, few people have that. Most simply have too few models. A Ph.D. would have tons, but they all represent some specialized knowledge: a very detailed simulation of one aspect of the world that may be missing much of the rest.
I have more details in this article.