Last week, Allen AI announced that they were starting a new project on common sense reasoning. This project would complement their portfolio of existing offerings, including NLP libraries, machine reading, computer vision and scientific solvers. Few details on the project are currently available. What we know is that Paul Allen has invested $125 million (a three-year tranche) to refill the coffers of other AI2 projects and ignite this new pursuit.
If you’re not familiar with the task of common sense reasoning, the gist is teaching computers about stuff that humans take for granted: If you drop a glass on the concrete, it will likely break. The most likely color for a golden retriever is gold, and so on. It’s the kind of information that a child would know, but oddly our most intelligent computers don’t know.
We’ve been down this path.
We’ve been down this path before — and have only partial success to show. Famously, Doug Lenat attempted to create a massive knowledge base using human experts (axiom engineers), resulting in Cyc and OpenCyc. A team at the MIT Media Lab built the Open Mind Common Sense project (ConceptNet), which harvested online data and imported knowledge sources. Other efforts crawled the Web and extracted key bits of data (DBpedia, YAGO, etc.). Ernest Davis, Gary Marcus and associates have tried a divide-and-conquer approach, looking at sub-domains such as spatial reasoning, physical reasoning, etc. In summary, plenty of brilliant people have attempted to crack this nut, yet here it remains, stubbornly mocking us. It’s a career-wrecker. Only a fool or a genius would chase this seemingly intractable problem. Luckily, both exist.
A bunch of us can now read the Web and extract truth propositions. It’s not too hard: we leverage Hearst patterns, or build our own rules to find assertions in text, and use statistics to give us a weighted confidence. We’re typically pulling the data from crowd-sourced stuff like Wikipedia, and use machine reading techniques to capture the proposition and the context. For example, at Legendary AI, we read a sentence like:
“The chef cooked the steak in the park on Tuesday with his son.”
and we extract an event:
E1 = (The chef, cook, the steak), Time= on Tuesday, Location= in the park, Accompanier= with his son
From this one sentence we identify some common sense items like ‘a chef can cook’, ‘a steak can be cooked’, ‘the chef has a son’, ‘cooking can occur in a park’, ‘a chef is a person’, etc. And, if you read enough sentences, you can capture some interesting statistics (e.g., how often does a chef cook in the park vs. in the kitchen).
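The extraction above can be sketched in a few lines of Python. This is a deliberately minimal toy, not Legendary AI’s actual pipeline: real machine readers use dependency parsing or semantic role labeling, while here the function name (`extract_event`), the preposition-to-role cue table, and the tiny verb lexicon are all illustrative assumptions.

```python
import re

# Assumption: prepositions act as naive role cues. A real system would
# disambiguate ("on Tuesday" vs. "on the table") with a parser or SRL.
CUE_ROLES = {"on": "Time", "in": "Location", "with": "Accompanier"}

def extract_event(sentence, verb_lemmas):
    """Return a (subject, verb-lemma, object) tuple plus a dict of
    modifier roles, peeled off by whitespace-bounded preposition cues."""
    text = sentence.rstrip(".")
    # Split the sentence at each cue preposition, keeping the cue.
    parts = re.split(r"\s(on|in|with)\s", text)
    core, rest = parts[0], parts[1:]
    # Pair each cue with the phrase that follows it.
    modifiers = {CUE_ROLES[cue]: f"{cue} {phrase}"
                 for cue, phrase in zip(rest[::2], rest[1::2])}
    # Naive subject-verb-object split on a known surface verb form.
    for surface, lemma in verb_lemmas.items():
        if f" {surface} " in core:
            subj, obj = core.split(f" {surface} ", 1)
            return (subj, lemma, obj), modifiers
    return (core, None, None), modifiers

event, mods = extract_event(
    "The chef cooked the steak in the park on Tuesday with his son.",
    {"cooked": "cook"},  # assumed toy verb lexicon
)
# event -> ('The chef', 'cook', 'the steak')
# mods  -> {'Location': 'in the park', 'Time': 'on Tuesday',
#           'Accompanier': 'with his son'}
```

Brittle as it is, the sketch shows the shape of the output: once events are in this form, the derived propositions (“a chef can cook”, “cooking can occur in a park”) and the corpus-level statistics fall out of simple counting.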
Here’s the problem: we don’t need more of the same. If the goal is to grow the set of existing common sense propositions, then give the fine folks at Luminoso a chunk of the money, and let them extend ConceptNet. If the goal is more scalable inferences, hand the money to Pilosa and let them go crazy with distributed bitmap indexes. There’s plenty of room for incremental improvement, and those paths should be considered. However, there’s a need for innovation. By innovation, I mean the 16th century definition: a heretic who stirs the pot to such an extent as to be considered crazy, worthy of excommunication or imprisonment. Go big or go home. This is the challenge.
Hints of Innovation.
It was good to see Allen AI mention that they might use a combination of computer vision and reading comprehension in the task. This would help with many of the space-time questions like object properties (size, color, etc.), their locations (in a toaster, on the bed, etc.) and potentially, the physics problems (momentum, gravity, etc.) That said, would the system merely extract logical propositions, or do something greater?
Common Sense Deep Learning.
We can wholly expect that the rallying cry will be to take logical propositions (that look a whole lot like symbolic logic), and beat them senselessly (pun intended) until they work with this year’s favorite deep learning model. This raises the question: do you obtain common sense for the sake of common sense, or is it just input to another model?
Either way, we can expect that deep learning will be used to create better axiom extractors. As we try to understand context, cause/relationship, event order, motivation and other stubborn problems, our DL classifiers will let us dig deeper and with more precision than ever before.
Schank Shanks His Shank.
Will Schank (Roger, that is) continue to point out obvious stuff like “a bunch of logical propositions isn’t artificial intelligence”? To what extent will narratives be involved? Will common sense include life-cycles of events, and typical outcomes? Will it relate back to a goal, or some type of reasonable motivation? Or are we just solving “Elephants NOT fly”?
It’ll be exciting to see the details of the project emerge. How will they do common sense without word sense disambiguation? Or will we end up with assertions like (a cougar, Is-A, cradle robber) when we meant the wildcat?
And Please Join The Communist Party.
Here’s the last one. Yeah — we spent the $125,000,000.00 and created a kick-ass common sense engine, but you can’t use it for anything other than looking at it. If this ends up with some communist license where it can only be used if we agree to give up all rights to ‘making money’, ‘providing business value’, and ‘solving actual human problems’, then you can keep it. The ABSOLUTE last thing we need is a “common sense teaser framework”. We’re not doing AI for the f@#!_ of it. Let’s change lives. A communist license on a project that was meant to “even the playing field with Google and Facebook” has the exact opposite effect.
Artificial common sense is a terrible problem to work on. It ends careers faster than working on a perpetual motion machine or cold fusion. But here we are. We need an innovator — a bold, fearless heretic — to be ridiculed, scorned and thrown into academic prison — and to not be okay with a life sentence.
Source: Deep Learning on Medium