NeurIPS 2019

Source: Deep Learning on Medium

This Looks Like That: Deep Learning for Interpretable Image Recognition : The paper introduces ProtoPNet, a new architecture with a prototype layer. It acts as an interpretable layer while still achieving performance on par with SOTA deep learning models. [paper] [3min video] [slides]
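The core idea of the prototype layer is that each prototype is a learned vector compared against every spatial patch of the convolutional feature map; the network reasons by "this patch looks like that prototype." Below is a minimal NumPy sketch of that comparison, not the authors' implementation — the shapes, function name, and the `eps` value are illustrative, though the log similarity follows the activation described in the paper.

```python
import numpy as np

def prototype_similarity(features, prototypes, eps=1e-4):
    """Similarity between conv feature patches and learned prototypes.

    features:   (H, W, D) conv feature map for one image
    prototypes: (P, D)    learned prototype vectors
    Returns:    (P,)      max-pooled similarity per prototype, using the
                log activation from the ProtoPNet paper:
                sim = log((d^2 + 1) / (d^2 + eps)), d = L2 distance.
    """
    H, W, D = features.shape
    patches = features.reshape(H * W, D)           # flatten spatial grid
    # squared L2 distance from every patch to every prototype: (H*W, P)
    d2 = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    sim = np.log((d2 + 1.0) / (d2 + eps))          # large when d2 -> 0
    return sim.max(axis=0)                         # best-matching patch
```

The max-pooled scores then feed a plain linear classifier, so each class decision decomposes into visible "looks like" evidence from individual prototypes.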


What’s the point of research if it isn’t reproducible? Yes, we have heard this being talked about quite often, and it was visible how the research community and NeurIPS have responded to the concern. Reproducibility is being taken seriously, or at least the shift has started. NeurIPS, for the first time, organized a Reproducibility Challenge, encouraging participants to reproduce accepted papers via OpenReview. It was interesting to go through the “Reproducibility checklist”.


ML models are known to be unfair (so far). Racial biases, gender biases and other such biases can percolate into models, leading to disastrous consequences.


NeurIPS 2019 witnessed a lot of research in this domain.

Invited Speakers

Celeste Kidd talked about How to Know on the opening day. It was the most well-received talk. Highlighting the issue of sexual harassment in the wake of the #MeToo movement, her keynote speech struck a chord with everyone in the packed conference hall.

Yoshua Bengio gave a visionary speech titled From System 1 Deep Learning to System 2 Deep Learning. While it’s great to see ML and DL advance rapidly, it was important to highlight their pitfalls and shortcomings, and the improvements and adaptations needed for the future.

Cool Demonstrations

How can this Paper get in? — A game to advise researchers writing for a top AI conference. The aim of the project is to build a natural-language classification + explainable-AI tool that analyzes a paper and suggests changes to get it accepted at top conferences. For the purpose of the demonstration, they took the paper title as input instead of the entire paper.

Bringing AI to the Command Line [Demo] — One of the coolest demos and ideas I found at NeurIPS 2019. It might give sleepless nights to software developers, engineers and CS folks in general, but still, CLAI (Command Line AI) is a powerful idea. The tool, developed by IBM Research AI, leverages NLP and reinforcement learning. Natural-language support, in-situ support and troubleshooting, and intelligent automation are its power-packed features. It would be interesting to see how it evolves.

Folks from Panasonic Beta Research Lab demonstrated Smart Home Appliances: Chat with your Fridge. Users could ask questions about the contents of the fridge, the number of items and their freshness via a Facebook Messenger interface. Despite it being a demo, I was expecting real-world data and a richer experience; the DNN model was trained on a specific set of images and the dataset was restrictive. Nonetheless, it was a great idea for demonstrating the power of CV+NLP and has a lot of potential and applicability.

One-on-one fitness training with an AI avatar : Millie, a digital fitness trainer, interacts in real time, observing and grading the user’s movements for speed, accuracy and form.

Robot-assisted hair brushing : Researchers from USC developed a robot that uses a camera to build a 3D map of the back of a person’s head and hair, which can be visualized as a point cloud. It then creates planned brushing paths and executes them. [Demo]

Learning Machines can Curl — Adaptive deep reinforcement learning enables the robot Curly, designed by a team from Korea University and the Berlin Institute of Technology, to win against human players in an icy world. Deep learning is defeating champions not just in games such as Go and chess; it is now making a foray into Olympic sports. The sporting world is poised for a revolution.