A Year in Review: AI in 2019

2019 witnessed a surge in algorithms, data, investment, research papers, and much more. It was a fast-paced year, with advances in research and the development of new methods, services and frameworks. Here are some milestones in various areas of Artificial Intelligence in 2019.

1. General Machine Learning

For simplicity, I generalize Machine Learning to the following equation, which more or less fits many cases:

Machine Learning = Data + Algorithm + Human in the loop

Let us look at the developments of 2019 from this angle:

a) On the Data side,

Data is the new oil, yet in many cases we face a shortage of data to train our models. To tackle this problem, synthetic data generation techniques such as Generative Adversarial Networks and statistical models now let us generate the data we need (numeric, categorical, text, speech, image or video) when real training data is scarce.
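
As a minimal sketch of the statistical flavour of this idea (not a full GAN), the snippet below fits a Gaussian mixture to a small stand-in dataset with scikit-learn and samples new synthetic rows from it; the data and numbers here are purely illustrative.

```python
# Minimal sketch: generate synthetic numeric data by fitting a
# Gaussian mixture to a (hypothetical) real dataset and sampling from it.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Stand-in for the real data we are short of (two numeric features).
real_data = rng.normal(loc=[50.0, 3.0], scale=[10.0, 0.5], size=(200, 2))

# Fit a simple statistical model of the real data ...
gmm = GaussianMixture(n_components=3, random_state=0).fit(real_data)

# ... and sample as many synthetic rows as we need for training.
synthetic_data, _ = gmm.sample(1000)
print(synthetic_data.shape)  # (1000, 2)
```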

b) On the algorithmic side,

The implementation and operationalization of machine learning algorithms have scaled up; at this point we have amazing plug-and-play libraries like NumPy, pandas and scikit-learn.
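
Here is a minimal sketch of that plug-and-play workflow, using scikit-learn's built-in iris dataset purely for illustration:

```python
# Minimal sketch of a "plug and play" scikit-learn workflow.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling and model chained into a single estimator.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```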

Artificial Neural Networks are a branch of machine learning inspired, amazingly enough, by biological neural networks (so we were basically inspired by ourselves!). One of the major challenges in building Artificial Neural Networks is choosing the right framework; in 2019 data scientists are spoiled for choice among options like PyTorch, Microsoft Cognitive Toolkit, Apache MXNet, TensorFlow, and others. The problem is that once a neural network has been trained and evaluated in one framework, it is extremely difficult to port it to another, which somewhat diminishes the far-reaching capabilities of machine learning. To handle this, AWS, Facebook and Microsoft collaborated to create the Open Neural Network Exchange (ONNX), which allows trained neural network models to be reused across multiple frameworks. ONNX is becoming an essential technology for interoperability between neural networks.
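
As a minimal sketch (assuming PyTorch is installed), this is roughly how a trained model can be exported to the ONNX format; the tiny toy model and the file name are placeholders, not a real production model.

```python
# Minimal sketch: export a (toy) trained PyTorch model to the ONNX format
# so it can be consumed by other frameworks/runtimes that support ONNX.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()  # export in inference mode

dummy_input = torch.randn(1, 10)  # example input that defines the graph's shapes
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])
```

The exported model.onnx file can then be loaded by another ONNX-compatible runtime or framework, which is exactly the reuse the paragraph above describes.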

c) Human in the loop

To facilitate human-in-the-loop engagement, data labeling services and tools have been built, such as Amazon Mechanical Turk (a marketplace that makes it easier to build labeled datasets) and Doccano (an open-source annotation tool by Hironsan), both of which assist in labeling data.

2. Natural Language Processing

Natural Language Processing (NLP) is a field of computer science and artificial intelligence focused on the ability of machines to comprehend language and interpret messages.

When it comes to NLP, Transformer models stole the show in 2019, with new models appearing almost every month, such as BERT, XLNet, RoBERTa, ERNIE, XLM-R and ALBERT.

In 2017 Google released the Transformer model, which set new state-of-the-art results for machine translation. It transformed how deep learning is done by introducing self-attention as an alternative to the then-dominant recurrent and convolutional neural networks. In 2018 the BERT model was released, building on the Transformer architecture, and it broke state-of-the-art records on many different NLP tasks such as text similarity, question answering and semantic search. Recently Baidu's ERNIE outperformed its predecessor, Google's BERT, in understanding language. First applied to Chinese, it now works better even for English; that is how the year ends!
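
As a small illustration (assuming the Hugging Face transformers library, which is not mentioned in the original post), here is a pretrained BERT model filling in a masked word, the pre-training task behind many of those records:

```python
# Minimal sketch (assumes the Hugging Face `transformers` library):
# use a pretrained BERT model to fill in a masked word.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("The Transformer changed how deep [MASK] works."):
    print(prediction["token_str"], prediction["score"])
```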

Also worth noting when it comes to text generation: one of the most talked-about models of the year was OpenAI's GPT-2, and OpenAI eventually released its largest version (with 1.5B parameters). Go ahead and test it yourself: https://talktotransformer.com/
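
If you prefer code to the web demo, here is a minimal sketch (again assuming the Hugging Face transformers library) that generates text with a publicly released GPT-2 checkpoint; the small "gpt2" model is used here to keep the download manageable, not the 1.5B-parameter version.

```python
# Minimal sketch (assumes the Hugging Face `transformers` library):
# generate text with a publicly released GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("2019 was a big year for AI because", max_length=40)
print(result[0]["generated_text"])
```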

3. Computer Vision

Computer vision is the ability of artificially intelligent systems to see things as a human would, and it has been growing in popularity across all sectors during the last few years. The current state of computer vision technology is powered by deep learning algorithms that use a special kind of neural network, the Convolutional Neural Network (CNN), to make sense of images. This year there was a surge in computer vision solutions providing quality control in medical devices, food and drink, pharmaceuticals and the automotive industry. The state of the art was pushed forward in real-time image segmentation, point cloud segmentation, image generation and pose estimation.
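
For readers new to CNNs, here is a minimal, purely illustrative sketch in PyTorch of the convolution / pooling / classification building blocks these systems rely on; it is not a state-of-the-art model.

```python
# Minimal sketch of a CNN image classifier in PyTorch (illustrative only).
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local image features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # 10 output classes
)

images = torch.randn(4, 3, 32, 32)  # a dummy batch of 32x32 RGB images
logits = cnn(images)
print(logits.shape)  # torch.Size([4, 10])
```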

Google also open-sourced datasets such as Open Images V5 (a dataset of ~9M images annotated with image-level labels, object bounding boxes, object segmentation masks and visual relationships), and Facebook released Pythia, a modular framework that operationalizes NLP and computer vision together and does a good job on visual question answering tasks.

Here is a demo of Pythia up and running (by CloudCV) that you can try out.

4. Reinforcement Learning

Reinforcement Learning is a machine learning paradigm in which a learning algorithm is trained not on a preset dataset but through a feedback system. These algorithms are touted as the future of machine learning because they eliminate the cost of collecting and cleaning data. Reinforcement Learning has seen advances in industries like finance, cyber security, manufacturing and much more.
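
To make the "learning from feedback" idea concrete, here is a minimal sketch of tabular Q-learning on a toy five-state corridor, where the agent only receives a reward for reaching the rightmost state; the environment is invented purely for illustration.

```python
# Minimal sketch: tabular Q-learning on a toy corridor environment.
# The agent learns purely from reward feedback, not from labeled data.
import numpy as np

n_states, n_actions = 5, 2          # states 0..4, actions: 0 = left, 1 = right
q_table = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:              # episode ends at the rightmost state
        if rng.random() < epsilon:             # explore ...
            action = int(rng.integers(n_actions))
        else:                                  # ... or exploit current estimates
            action = int(np.argmax(q_table[state]))
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Feedback-driven update of the action-value estimate.
        q_table[state, action] += alpha * (
            reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
        )
        state = next_state

print(q_table)  # "go right" should dominate in every state
```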

Here is a primer on the best deep reinforcement learning research of 2019.

5. Autonomous Systems

The quest to automate mundane and repetitive machine learning tasks has been going on for a while, and it has resulted in tools like AutoML and Azure ML that can be used to train high-quality custom machine learning models at scale. For tips and recommendations, check out my blog on building robust, scalable and automated ML systems.

Connected and automated vehicles (CAVs) are a transformative technology with great potential to change our daily lives. CAV-related research has advanced significantly in recent years, especially in inter-CAV communication, CAV security, intersection control, collision-free navigation, and pedestrian detection and protection. Find more on the state of the art here.

6. Ethical Artificial Intelligence

It was heartening to see the big companies putting emphasis on this side of AI. I want to direct your attention to the guidelines and principles released by a couple of these companies.

These all essentially talk about fairness in AI and about when and where to draw the line. Significant progress has also been made by various groups, such as the European Commission and its High-Level Expert Group, which presented their Ethics Guidelines for Trustworthy AI. As part of the European AI Alliance, I had the privilege of participating in the process of formulating these guidelines at the first European AI Alliance Assembly, held in Brussels, Belgium, on 26 June 2019.

Here are my thoughts on 8 enablers for Europe’s Trustworthy AI.

As you have seen, much happened in 2019, and it is becoming increasingly difficult to keep up with all these developments. In case I missed something, please let me know in the comments below; I would be glad to hear and learn from your perspectives.

2020 and the coming decade will push the boundaries further; brace yourself for lots of learning and change. Hopefully, AI for Good.

Cheers to 2020! I wish you a happy and successful New Year.