Three Practical Ways to Scale Machine Learning in the Real World


As NeurIPS sent the AI world a sobering message, the robotics industry seems to have a more pragmatic take on scaling machine learning solutions.

NeurIPS (the Neural Information Processing Systems conference) just ended with a record number of attendees that even a lottery system could barely accommodate: 9,000 tickets sold out in 12 minutes, a sign of exploding interest in AI from all over the world. However, while AI innovation now happens in industry as well as academia, most companies still struggle to identify high-value use cases and scale AI in the real world.

The company I work for provides machine learning software that enables more autonomous and dexterous robots in factories and warehouses. To help bridge that gap, I work closely with robotics companies and system integrators to productize frontier machine learning research.

Last week I flew to the other side of the world for the International Robot Exhibition (IREX) in Tokyo, the world's largest robotics event. There, leading robotics companies demonstrated various ways of applying AI and ML in robotics.

AI enables a move away from automation (hard-programmed) to true autonomy (self-directed)

Previously I wrote about how AI launched the era of Robotics 2.0. In the past, robots were used mostly on mass production lines: they were programmed to perform the same tasks with high precision and speed, but they couldn't react to changes or surprises.

However, as consumer demand for customization grows and the labor force continues to shrink, we need more autonomous and flexible robots. Companies have therefore started to experiment with ways to incorporate ML into robotics, enabling robots to handle a wide range of tasks that traditional robots couldn't. At IREX, ML was being used to improve robot vision and control, as well as to make real-world deployments scale.

Robot vision: breakthroughs in recognition, scalability, and self-learning

Even the most advanced 3D structured-light cameras have difficulty identifying objects with transparent packaging or with reflective or dark surfaces, because the projected light scatters away or is absorbed.

Jumbled scenes pose even more challenges, as items overlap with one another. That's why shaker tables and parts feeders are widely used in manufacturing.

In addition, traditional machine vision systems are not flexible enough: you need to register objects beforehand by uploading their CAD models for pattern matching. Even a small change in the process means re-registering the objects and reprogramming the robots.

But now, with recent progress in deep learning, semantic segmentation, and scene understanding, we can recognize transparent and reflective packaging with commodity cameras.
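How might segmentation-based recognition look in practice? Here is a minimal sketch using torchvision's off-the-shelf pretrained DeepLabV3 model. The input image path is hypothetical, and production bin-picking systems would train task-specific models on their own data rather than use this generic network.

```python
# A minimal sketch of segmentation-based scene parsing with a pretrained
# model. Uses torchvision's off-the-shelf DeepLabV3 purely to illustrate
# the idea; real bin-picking systems train task-specific models.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("bin_scene.jpg").convert("RGB")     # hypothetical input
batch = preprocess(image).unsqueeze(0)                 # shape [1, 3, H, W]

with torch.no_grad():
    logits = model(batch)["out"]                       # [1, num_classes, H, W]
mask = logits.argmax(dim=1).squeeze(0)                 # per-pixel class IDs
```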

At FANUC's booth, a bin-picking LR Mate 200iD robot picked up air joints using deep learning algorithms and a 3D sensor. The parts were randomly piled in a bin, and FANUC claims that no pre-registration is required, as the system performs point cloud blob matching in real time.
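FANUC has not published the details of its pipeline, so the sketch below only illustrates the general idea behind blob extraction: clustering a raw 3D scan into per-part point blobs with DBSCAN, here via the Open3D library. The scan file name and clustering thresholds are illustrative assumptions.

```python
# A generic sketch of splitting a 3D sensor's point cloud into candidate
# "blobs" (one per part) without CAD pre-registration. This is not FANUC's
# actual system, only an illustration of the general approach.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("bin_scan.pcd")          # hypothetical scan
pcd = pcd.voxel_down_sample(voxel_size=0.002)          # 2 mm grid for speed

# Density-based clustering: points closer than eps join the same blob;
# noise points receive the label -1 and are skipped below.
labels = np.array(pcd.cluster_dbscan(eps=0.005, min_points=30))

for blob_id in range(labels.max() + 1):
    blob = pcd.select_by_index(np.where(labels == blob_id)[0])
    center = blob.get_center()                         # candidate pick point
    print(f"blob {blob_id}: {len(blob.points)} points, center {center}")
```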

Right next to FANUC's booth, Kawasaki Heavy Industries (KHI) showcased similar bin-picking solutions leveraging ML technologies from two startups, Photoneo and Ascent. On the other side of its booth, KHI worked with Dexterity to demonstrate an automated depalletizing solution whose robot can handle boxes of various sizes at the same time.

In another hall, OSARO, a San Francisco-based ML startup, joined Denso Wave to debut its "Orientation" feature: the robot picks transparent bottles from a jumbled bin, identifies not just the optimal pick points but also the orientation of the objects (the top or bottom of each bottle), and places the bottles vertically on a conveyor belt.

Yosuke Sawada, General Manager of the Robot Business Division at Denso Wave, commented: “OSARO’s newly developed ‘Orientation’ feature is one of the technologies customers have been waiting for. This exciting new feature helps identify challenging items and improve picking rates for warehouse operators and factory automators.”

The bottles are completely transparent and therefore difficult to recognize with traditional machine vision sensors. This is the first time such a feature has been demonstrated, and it allows AI-enabled robots to be used not just for pick-and-place but also for kitting, packaging, machine loading, and assembly.

[Photo: Bastiane Huang]

Robot control: intelligent placement and SKU-based handling

Humans learn to pick up and put down all kinds of items from birth; we complete these tasks instinctively. Machines have no such experience and must learn them. Being able to arrange items neatly matters especially in markets like Japan, where products need to be carefully arranged and packaged without any damage.

Leveraging ML, robots can now judge depth more accurately. Trained models can also determine the orientation and shape of an object automatically, for example, whether a cup is facing up, facing down, or lying in some other pose.

Object modeling and voxelization can be used to predict and reconstruct 3D objects. These techniques give the machine a more accurate estimate of an item's size and shape, so it can place the item in the required position more precisely.
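As a concrete illustration of voxelization, the sketch below maps a point cloud onto a fixed occupancy grid, the representation many 3D reconstruction models consume. It is plain NumPy, and the grid resolution is an arbitrary assumption.

```python
# A minimal sketch of voxelizing a point cloud into an occupancy grid.
import numpy as np

def voxelize(points: np.ndarray, grid_size: int = 32) -> np.ndarray:
    """Map an (N, 3) point cloud to a (grid_size,)*3 boolean occupancy grid."""
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    # Normalize points into [0, 1], then scale to integer voxel indices.
    scaled = (points - mins) / np.maximum(maxs - mins, 1e-9)
    idx = np.clip((scaled * grid_size).astype(int), 0, grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Example: random surface points standing in for a scanned item.
cloud = np.random.rand(5000, 3)
occupancy = voxelize(cloud)
print(occupancy.sum(), "of", occupancy.size, "voxels occupied")
```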

This makes SKU-based handling possible: the robot can automatically decide, per SKU, whether to place an item gently or drop it quickly. We can therefore optimize the system's throughput without risking damage to fragile objects.
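A toy sketch of what per-SKU handling logic could look like: each SKU maps to handling parameters that trade throughput against the risk of damage. The SKU names and values here are invented for illustration; real systems would configure or learn them per deployment.

```python
# Invented example of SKU-based handling: fragile SKUs get slow, low
# placements; robust ones can be released fast and high for throughput.
from dataclasses import dataclass

@dataclass
class HandlingProfile:
    max_speed: float       # speed limit for the move, m/s
    release_height: float  # height above the target surface at release, m

PROFILES = {
    "glass_bottle": HandlingProfile(max_speed=0.3, release_height=0.01),
    "plush_toy":    HandlingProfile(max_speed=1.5, release_height=0.15),
}
DEFAULT = HandlingProfile(max_speed=0.8, release_height=0.05)

def plan_place(sku: str) -> HandlingProfile:
    # Unknown SKUs fall back to a conservative middle-ground profile.
    return PROFILES.get(sku, DEFAULT)

print(plan_place("glass_bottle"))
```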

Companies have also started to experiment with reinforcement learning and other ML techniques for motion planning. At IREX, Ascent showed a video of a robot using reinforcement learning to put two parts together, and Yaskawa, a leading robotics company, discussed the potential benefits of machine learning for path planning in a presentation.

However, none of the above advances can be deployed in real life if they require large amounts of training data and long training times. Collecting training data is challenging and expensive in real-world applications like robotics and self-driving cars, which is why I was especially excited to see data efficiency come up at IREX.

Scalability for the real world: data efficiency

In one of the presentations at IREX, AI Cube Inc. ("AI³"), a company Yaskawa established last year to develop AI solutions for manufacturing and industrial robots, introduced Alliom, a tool that helps companies digitalize the process of building machine learning models.

According to AI Cube, Alliom offers a simulation environment and can perform data augmentation, generating synthetic data that looks similar to real-life objects. The company has used Alliom to accelerate the training of ML models for random bin picking and hopes to expand the solution to other applications.
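Alliom's internals are not public, so the following is only a generic flavor of image data augmentation: multiplying a handful of labeled images into many varied training samples. It uses torchvision's standard transforms, and the source image path is hypothetical.

```python
# A generic sketch of image data augmentation (not Alliom's actual method):
# each pass through the pipeline yields a differently perturbed sample.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),        # parts land at any angle
    transforms.ColorJitter(brightness=0.4,        # factory lighting varies
                           contrast=0.4),
    transforms.RandomResizedCrop(224,             # scale/position variation
                                 scale=(0.7, 1.0)),
])

image = Image.open("part.jpg").convert("RGB")     # hypothetical source image
samples = [augment(image) for _ in range(8)]      # eight synthetic variants
```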

This shows that the robotics industry has moved beyond experimenting with fancy ML algorithms and started to think about actually deploying ML-enabled robots in the field. ML solutions need not just to work but to scale efficiently across a variety of use cases; otherwise, customers have little incentive to deploy them in their facilities.

I mentioned in my previous article that robotics companies face the "dilemma of innovation": they recognize the urgent need to innovate, but they still have to take care of their core business, the automotive and manufacturing companies that require high-speed, high-precision work. Those requirements run counter to what other segments need: flexibility, dexterity, and robots that can learn to identify and handle varied components.

At IREX, several robotics giants exhibited alongside startups: to name a few, KHI with Photoneo, Dexterity, and Ascent; Denso Wave with OSARO; FANUC with Preferred Networks. Robotics companies are changing to embrace AI, but are they reacting fast enough to the changes ML is bringing to their industry?

In the automotive industry, we saw car OEMs struggle to compete with new entrants such as Tesla and Waymo in the transition to autonomous driving. So far, no tech giant has entered the robotics industry outright, but Google, DeepMind, and Facebook have all been working on robotics-related machine learning research for a while.

It will be interesting to watch how AI disrupts and reshuffles the robotics industry. Among tech giants, robot makers, manufacturers, and AI startups, who will cement a position in the era of AI-defined robotics?
