My head hurts, a lot. Look, I know many of the people in the pattern recognition and Deep Learning fields are (beyond) brilliant. But most of this FUD about “automatic software” seems to be a giant case of misusing the human mind’s ability to extrapolate, stretching concepts into areas where it extrapolates *very poorly*.

For a human to jump 1 inch high is trivial. To jump 1 foot high is still easy for most able-bodied people. But to take those two data points and say, well, if we can make that kind of massive leap from a 1-inch jump to a 1-foot jump, then it’s only a matter of time before humans can jump 1,000 feet, *is insane*!
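To make the fallacy concrete, here is a minimal toy sketch of it in Python. All the numbers are made up for illustration: we fit a straight line through the two “observed” jumps and then project it far outside the observed range, which is exactly what the hype does.

```python
# A toy illustration of the extrapolation fallacy: fit a trend to two
# easy data points, then project it far outside the observed range.
# The "effort" axis and its values are invented for illustration only.
import numpy as np

effort = np.array([1.0, 2.0])    # hypothetical "effort" levels
height = np.array([1.0, 12.0])   # observed jump heights, in inches

# Fit a straight line through the two observations.
slope, intercept = np.polyfit(effort, height, 1)

# Naively extrapolate: how much "effort" until a 1000-foot jump?
target = 1000 * 12               # 1000 feet, in inches
needed = (target - intercept) / slope
print(f"Trend line: height = {slope:.0f} * effort + {intercept:.0f}")
print(f"'Only' {needed:.0f} effort units to a 1000-foot jump!")
# The arithmetic is flawless; the conclusion is absurd, because the
# trend line knows nothing about the hard limits of the human body.
```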

There are fundamental limitations in the structure of the human body that make that impossible. The parallel to AI/Deep Learning is exact. We’re already hearing complaints *from the best minds within Deep Learning itself* that the current pattern recognition architectures, while extremely powerful for low-level pattern recognition, have inherent limitations that will never let them get close to AGI.

To think that just because we can group and recognize certain pixel patterns in a useful manner, or train an agent with reinforcement learning to optimize action/reward choices within extremely narrow domains such as a video game or the operation of a military vehicle (air combat), we are therefore on the road to machines that write software is a gross misuse of imagination!
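For a sense of just how narrow that kind of optimization is, here is a minimal sketch of tabular Q-learning on an invented five-cell corridor world (every name and number below is illustrative, not anyone’s real system). The agent learns to walk right toward a goal, and *nothing else*; the learned table is meaningless outside this exact toy world.

```python
# Minimal sketch: tabular Q-learning on a made-up 5-cell corridor.
# The agent starts at cell 0 and gets reward 1 for reaching cell 4.
import random

N_STATES = 5            # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action choice
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned policy: go right (+1) from every cell.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
# A perfect policy for this corridor, and utterly useless for anything
# resembling the open-ended judgment calls of software design.
```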

I could write pages on the massive problems involved in “growing” or “training with big data” an AI agent that writes complex software, none of which we are close to solving, but I’ll end with this instead. Here is a short, tragically incomplete list of some of the insanely tough internal mental models you would need to *master* (not just approximate) to emulate a veteran software designer:

  • Dynamic multi-pathway cost analysis across a huge number of software flow patterns, with the ability to intelligently leave huge chunks of the code incomplete during the design/build iteration cycle in a way that still allows successful completion of the code base, instead of producing piles of junk code with interesting, interconnected logic patterns that *are contradictory or effectively useless*.
  • The ability to synthesize *proper, cogent, relevant* internal three-dimensional models of interconnecting data structures and code elements that are subject to asynchronous, intermittent failures and must also survive antagonistic attacks from cooperating hostile agents.
  • And my favorite: the frequent and crucial decision making involved in creating code that serves *humans*, a violently difficult task that requires an unbelievably intricate knowledge of human emotion, physical hardware (eyes, hands, etc.), and frequently irrational behavior patterns, knowledge we are currently *light years away* from emulating, let alone replicating, in any manner that produces useful code.

What we are witnessing is another great hype cycle that will make many companies rich, despite the fact that the vast majority of those same companies will *fail*, leaving investors holding empty bags of promises instead of cash. The tragically few remaining successes will *still not* create successful code generators; instead, as usual, they will come up with some niche solutions that perform a narrow but useful service for some market.

My two cents, but that two cents has been the smart money on this subject for a *long* time.

Source: Deep Learning on Medium