Impressions and Lessons from the O’Reilly AI Conf 2018



AI superstar and the author’s personal hero, Peter Norvig, giving his keynote at the AI Conf 2018

I recently attended the O’Reilly Artificial Intelligence Conference 2018. I am writing this post to share my experiences and some of the lessons I learned with friends and colleagues. Here are the slides of many of the presentations.

The main themes of the conference

AI for customer support

A big fraction of the companies that are actually employing AI or ML to put their data to work are doing it in the fields of customer support, customization, and user experience. Notable examples include:

  • Uber, who employed deep learning to develop natural language processing models to classify and suggest responses to customer tickets, as embodied in their Customer Obsession Ticket Assistant (COTA-v2).
  • AT&T, who developed an LSTM network (a type of recurrent neural network) to produce a model of the sequence of touch-points (channels) that a customer will follow when interacting with the company, as well as the outcome of those interactions (whether they bought a product or service and through what channel).
  • Blue Shield of California, who sees the chatbot they have put in place as an integral part of their omni-channel customer service strategy. They emphasized, however, that the most successful chatbots don’t work by themselves. They are always built on top of a good data infrastructure foundation that includes well developed APIs and micro-services.
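AT&T’s touch-point model was an LSTM; as a much lighter-weight stand-in for the same sequence-to-outcome idea, one can sketch a classifier over n-grams of channel names (an n-gram model only captures short-range ordering, unlike an LSTM, and all journeys and labels below are made up):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical customer journeys: sequences of channel touch-points,
# labelled by whether the interaction ended in a purchase.
journeys = [
    "web email store", "web web phone", "email phone store",
    "phone phone phone", "web store", "email email phone",
]
bought = [1, 0, 1, 0, 1, 0]

# Bigrams over channels capture simple ordering effects; an LSTM,
# as in AT&T's model, would capture much longer-range structure.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(journeys, bought)
print(model.predict(["email phone store"]))
```

The point of the sketch is only the framing: each journey is a sequence of channels, and the target is the outcome of the interaction.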

Human-in-the-loop and the impact of AI on the workforce

There was a lot said about how AI shouldn’t (completely) replace humans but rather make their work more efficient, powerful and gratifying. Also, at least for the time being, humans are still instrumental in the development of AI solutions, due to the simple fact that most expert knowledge is contained in the brains of human experts, and they are the only ones capable of accurately doing the costly labeling of thousands of data points required to train ML algorithms.

The two main axes of human vs. AI replacement / collaboration / augmentation.

There are also intermediate approaches, such as active learning, based on the idea of collaboration between humans and machines in a semi-supervised way: let machines handle the easy, routine cases while routing the difficult and edge cases to human experts. Of course, this is practical only in applications where the cases not amenable to automated decisions are easy to identify and constitute a minority of all cases.
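The routing step can be as simple as thresholding a model’s confidence. Below is a minimal sketch with toy data and an arbitrary threshold of 0.9 (both are assumptions, not anything presented at the conference):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: two features, label = sign of their sum.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def route(x, threshold=0.9):
    """Automate confident predictions; escalate uncertain ones to a human."""
    proba = clf.predict_proba(x.reshape(1, -1))[0]
    if proba.max() >= threshold:
        return "auto", int(proba.argmax())
    return "human", None

print(route(np.array([3.0, 3.0])))     # far from the decision boundary
print(route(np.array([0.01, -0.01])))  # near the boundary: human review
```

In a full active-learning loop the human’s answers on the escalated cases would be fed back as new labels to retrain the model.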

Deep learning and reinforcement learning are all the rage

It’s probably no news to most readers that deep learning has positioned itself as the de facto approach for building ML models. Perhaps due to the conference happening in California and the fact that Google was one of the conference sponsors, there were tons of presentations about TensorFlow. By the way, after the conference I talked to some good friends of mine who are actually DL researchers, and they very much vouch for PyTorch as a nicer framework for running DL experiments.

Peter Norvig gave a wonderful keynote presentation about the amazing applications of DL to scientific problems that non-DL-experts have been able to pull off in recent years. There has been everything from applications in astronomy (gravitational lenses, exoplanet detection), through applications in medicine (assessing cardiovascular risk factors with computer vision, high-school students identifying cancer), all the way to agriculture (identifying sick cassava plants with a DL-based app, tracking cows).

There was a very nice talk by two young Microsoft researchers (Danielle Dean and Wee Hyong Tok) on the “best kept secret” in deep learning: “transfer learning.” In a nutshell, transfer learning is a technique to leverage the knowledge encoded in a deep network that has been carefully and painstakingly trained by experts on a large data set, to solve a different problem. To give an example, a deep network trained to solve an object classification task can be refurbished with relatively little work to solve another computer vision task, such as texture classification. This is achieved by using the network as a featurizer and only retraining the last layer on a possibly small data set.
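The featurizer-plus-new-head pattern can be sketched even without a real pretrained network. Below, a fixed random projection stands in for the frozen, pretrained layers (in practice one would load, say, a CNN pretrained on ImageNet and drop its classification head); only a simple classifier is retrained on top of the extracted features. The data, dimensions, and labels are all made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Stand-in for a frozen, pretrained feature extractor: its weights are
# fixed and never updated during the new task's training.
W_frozen = rng.normal(size=(784, 64))

def featurize(inputs):
    """Map raw inputs through the frozen layers to feature vectors."""
    return np.tanh(inputs @ W_frozen)

# A small, task-specific dataset (e.g. textures) with binary labels.
images = rng.normal(size=(100, 784))
labels = (images.sum(axis=1) > 0).astype(int)

# Transfer-learning step: retrain ONLY the last layer on the new task.
head = LogisticRegression(max_iter=1000).fit(featurize(images), labels)
print(head.score(featurize(images), labels))
```

Because only the small head is trained, a data set of a few hundred examples can suffice where training the full network from scratch would require orders of magnitude more.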

The texture classification problem: trained on a rather small dataset, but solvable via transfer learning!

The cloud is (pretty much) the only place to deploy AI/ML solutions

Given the complex hardware and software configurations required by most advanced ML and AI solutions, it should come as no surprise that cloud-based infrastructures predominate.
One of the nicest contributions in this regard was the keynote given by Levent Besik, from Google, in which he outlined the company’s efforts to “democratize AI and to make it easy and useful for all developers & users”, regardless of their degree of expertise in the technical or scientific aspects of AI. Google’s offering includes three levels of abstraction. At the lowest level one finds the platform (ML Engine, Dataflow, and Dataproc cloud services) and libraries (TensorFlow, Keras, Spark), for developers who want to create solutions “from scratch.” Then come the AI building blocks, which are essentially mature APIs for solving well-defined problems, such as language translation, speech transcription, or image recognition. At the highest level we find template solutions, such as a product recommendation engine or a customer contact center.

Amazon Web Services (AWS) offers a set of services and platforms for developing and deploying ML along the same general lines. I got the feeling, however, that some components of the AWS offering, such as Amazon SageMaker, are more mature, complete, and overall better designed than their Google counterparts.

The AWS ML Stack. Image reproduced with permission from AWS.

Auto-ML

If you are a data scientist or AI solutions developer, there is a chance that you too will be replaced by a machine in the not-too-distant future. Auto-ML technologies promise to abstract and automate end-to-end the most complicated aspects of ML model construction, from data pre-processing all the way to hyper-parameter tuning, model selection, and deployment. In a nutshell, an Auto-ML service takes as input a data set (which might be dirty and doesn’t even have to include labels) and outputs an already deployed ML model, complete with a REST API and everything!
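A rough, local approximation of one slice of what such services automate, the hyper-parameter and model search, can already be scripted by hand. The pipeline and grid below are arbitrary stand-ins for the much larger search space an Auto-ML service would explore, and the deployment step is of course missing:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Pre-processing + model in one pipeline, with its hyper-parameters
# searched automatically via cross-validation.
pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])
search = GridSearchCV(
    pipe,
    param_grid={"clf__C": [0.1, 1, 10], "clf__kernel": ["linear", "rbf"]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

What the commercial Auto-ML offerings add on top of this kind of search is the data cleaning before it and the managed, API-fronted deployment after it.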

Google Cloud’s AutoML solution for generating image classification models.

H2O is another company that showed interesting developments in this direction.

Given that I consider myself an ML practitioner (I abhor the term data-scientist), it should come as no surprise to you that I am still very skeptical about Auto-ML technologies. It’s hard to imagine a fully automated process that manages to carry out all the difficult experience- and intuition-based decisions involved in the creation of an ML model. Ingenious feature engineering, for instance, is a particularly tricky thing to automate. However, the recent advances in Auto-ML exhibited by Google, in which they have managed to develop neural nets that design other neural nets, make me doubt myself.

Quotes

To finish, I will leave you with some quotes that I found either interesting, illuminating, or provoking:

“Many of the AI things that we do have no business value beyond marketing” — Ben Taylor.

“Eight out of ten of all ML workloads run on AWS” — Hagay Lupesko, AWS.

“[Out of] 20 million developers, 1 million are data scientists and 1000 deep-learning researchers” — Levent Besik, Google Cloud.

“Some companies have many ‘good ideas’ [for AI/ML use] but can’t quantify value. If the idea starts with ‘wouldn’t it be cool…?’, it’s probably a bad idea.” — Ben Taylor.

“82% of organizations are in some stage of considering adopting AI. It’s very easy to do pilots but very hard to deploy them… Executives are focusing on customer retention and satisfaction, customer acquisition cost reduction.” — Manish Goyal, IBM

“Barriers to implement AI: Lack of skilled resources or technical expertise, Regulatory constraints. Legal, security, privacy concerns about the use of data and information.” — Manish Goyal, IBM

“In the future you are going to see affordable, intelligent, cloud-powered, personalized prosthetic devices” — Joseph Sirosh, Microsoft

“We’re moving into a world where machines and software can analyze (see patterns that were always hidden before); optimize (tell a plane which altitude to fly each mile to get the best fuel efficiency); prophesy (tell you when your elevator will break and fix it before it does); customize (tailor any product or service for you alone) and digitize and automate just about any job. This is transforming every industry.” — Tom Friedman, author of The World Is Flat, New York Times

“All ML does is provide inferences. Scientific method to the business. Inferences are easy for business people to relate to… Data science doesn’t lend itself well to agile methodologies” — Carlos Escapa, AWS

The four waves of AI — image taken from the Sinovation Ventures presentation.

Source: Deep Learning on Medium