AI and driving while looking at the rear-view mirror.


In 2008, during the credit crunch crisis, we heard financial people saying “According to our model, something like that could happen only once in a billion years.” Clearly, the models had some problems.

Finance was mainly using so-called "value-at-risk" models to estimate the probability that an asset's value could drop below a certain threshold. But, as Benoit Mandelbrot had already pointed out in 1963 ("The Variation of Certain Speculative Prices"), asset prices are not so easy to predict.

Prices are like salaries: most of them sit around a certain value, but some are shamefully high. You go to a meeting and normally assume that no one there is earning 1,000 times as much as you do.

Nonetheless, even if you earn 1M a year, you know that the probability of meeting a billionaire is low, but not zero. The exact probability would be hard to compute, but you know that betting your life on it would be risky: the fact that you have never met a billionaire in a meeting does not mean it is impossible.

Similarly, Mandelbrot observed, the probability of cotton prices skyrocketing or collapsing is not zero. Don't bet your life on it.
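
To see how big the gap can be, here is a minimal sketch, not from the original article, comparing how a Gaussian model and a heavy-tailed Student-t model (a common stand-in for fat-tailed price changes; all parameters are illustrative assumptions, not market estimates) rate the chance of a ten-sigma daily move:

```python
# A minimal sketch: probability of a daily move beyond 10 standard
# deviations under a Gaussian model vs. a heavy-tailed Student-t model.
# All parameters are illustrative assumptions, not market estimates.
from scipy import stats

threshold = 10.0  # a move of 10 "sigmas"

# Gaussian tail: P(X > 10) for a standard normal.
p_gauss = stats.norm.sf(threshold)

# Heavy-tailed alternative: Student-t with 3 degrees of freedom.
p_heavy = stats.t.sf(threshold, df=3)

print(f"Gaussian model:     P(move > 10 sigma) ~ {p_gauss:.1e}")  # ~8e-24
print(f"Heavy-tailed model: P(move > 10 sigma) ~ {p_heavy:.1e}")  # ~1e-3
```

Under the Gaussian, a ten-sigma crash happens "once in a billion years"; under the fat-tailed model, it is merely rare. That is the gap the 2008 models fell into.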

Mind that those value-at-risk models are part of artificial intelligence: they are rational agents acting so as to achieve, given observations and intrinsic uncertainties, the best expected outcomes.

Although wrong (but of course any model is proved wrong, sooner or later), value-at-risk was useful. It allowed financial institutions all over the world to analyse huge amounts of data and produce an estimate of what could happen.

Because of its (relative) mathematical complexity, and the amount of data analysed, almost everyone relied blindly on value-at-risk. In 2008, the Basel Committee on Banking Supervision still accepted it for modelling prices.

All that made banks underestimate the probability of default for the assets in their portfolios, and we know how it ended.

Enter Deep Learning, a new branch of Artificial Intelligence. Instead of asset prices from the past 50 years, we now have artificial neural networks analysing written language or videos, most of them produced during the past 20 years.

(Note that language shares many mathematical properties with prices and wages. Indeed, the same function was rediscovered several times to model wages (Pareto), word frequencies (Zipf) and prices (Mandelbrot).)
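
That shared function is the power law: the item of rank r (the r-th most frequent word, the r-th highest wage) has a weight proportional to 1/r^alpha. A minimal sketch, with an illustrative exponent of 1, roughly what Zipf measured for natural language:

```python
# A minimal sketch of the power law behind Zipf (word frequencies),
# Pareto (wages) and Mandelbrot (prices): the item of rank r has
# relative weight ~ 1 / r**alpha. The exponent is illustrative.
import numpy as np

alpha = 1.0
ranks = np.arange(1, 11)
freq = ranks.astype(float) ** -alpha
freq /= freq.sum()  # normalise to a probability distribution

for r, f in zip(ranks, freq):
    print(f"rank {r:2d}: relative frequency {f:.3f}")
# The top item is ~2x as frequent as the 2nd and ~10x the 10th:
# a few items dominate, but the tail never really dies out.
```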

Neural networks, like value-at-risk, can hardly predict anything really new. They merely look at the rear-view mirror and, with a (relatively) complex mathematical model, try to predict the future.

But while humans know that the ways of the Lord are infinite, a piece of software, which learns just from the statistics of the data it has crunched, does not. For it, the ways of the Lord are limited to the ones it has already travelled.

If an airplane lands on the highway, humans understand what is happening even if it is the first time they have seen anything like it. A neural network doesn't, unless it has been fed with that example.
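
Here is a toy illustration, with made-up data and a deliberately simple model: a classifier trained only on "car" and "truck" has no way to answer "airplane", so it confidently forces the intruder into one of the labels it knows.

```python
# A toy sketch (made-up data): a classifier trained only on "car"
# and "truck" must force any input, even an airplane, into one of
# those two classes, often with high confidence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two known classes as 2-D feature clusters.
cars = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
trucks = rng.normal(loc=[3.0, 0.0], scale=0.5, size=(100, 2))
X = np.vstack([cars, trucks])
y = np.array(["car"] * 100 + ["truck"] * 100)

model = LogisticRegression().fit(X, y)

# An "airplane on the highway": far from anything seen in training.
airplane = np.array([[30.0, 30.0]])
for label, p in zip(model.classes_, model.predict_proba(airplane)[0]):
    print(f"P({label}) = {p:.3f}")
# Near-certain output, yet both answers are wrong: "airplane" was
# simply never among the ways this model has travelled.
```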

Neural networks can ape human knowledge, but they are unable to understand, much less predict, rare events: the black swans.

Human beings, particularly very smart ones, are able to learn from unexpected events. Alexander Fleming, the discoverer of penicillin, came back from his holidays and found a bacterial culture plate with an open lid. He observed that this plate had fewer bacteria than the others. He decided to investigate why the bacteria had not reproduced, understood the cause, fought (as usual) with the establishment, and many years later humanity had a powerful antibiotic.

Imagine a neural network doing Alexander Fleming’s job:

Found plate with open lid.

Trash plate.

Goodbye penicillin.