Artificial brain gives robot the smarts for complex tasks

The robotic system has an artificial brain system that mimics biological neural networks and has integrated artificial skin and vision sensors. (Credit: NUS)

Engineers have developed an artificial brain system with touch and vision sensors to make a smarter robot.

Picking up a can of soft drink may be a simple task for humans, but it is a complex one for robots: a robot has to locate the object, deduce its shape, determine the right amount of strength to use, and grasp the object without letting it slip.

Most of today’s robots operate solely based on visual processing, which limits their capabilities. In order to perform more complex tasks, robots need an exceptional sense of touch and the ability to process sensory information quickly and intelligently.

The new sensory integrated artificial brain system mimics biological neural networks, and can run on a power-efficient neuromorphic processor, such as Intel’s Loihi chip.

The system also integrates artificial skin and vision sensors, equipping robots with the ability to draw accurate conclusions about the objects they are grasping based on the data the vision and touch sensors capture in real-time.

Fast and accurate sensing

“The field of robotic manipulation has made great progress in recent years,” says Benjamin Tee, assistant professor in the materials science and engineering department at the National University of Singapore.

“However, fusing both vision and tactile information to provide a highly precise response in milliseconds remains a technology challenge. Our recent work combines our ultra-fast electronic skins and nervous systems with the latest innovations in vision sensing and AI for robots so that they can become smarter and more intuitive in physical interactions,” he says.

Enabling a human-like sense of touch in robotics could significantly improve current functionality, and even lead to new uses. For example, on the factory floor, robotic arms fitted with electronic skins could easily adapt to different items, using tactile sensing to identify and grip unfamiliar objects with the right amount of pressure to prevent slipping.

For the new robotic system, the researchers applied an advanced artificial skin called asynchronous coded electronic skin, which Tee and colleagues developed in 2019. The sensor detects touches more than 1,000 times faster than the human sensory nervous system and can also identify the shape, texture, and hardness of objects 10 times faster than the blink of an eye.

“Making an ultra-fast artificial skin sensor solves about half the puzzle of making robots smarter. They also need an artificial brain that can ultimately achieve perception and learning as another critical piece in the puzzle,” says Tee.

Another puzzle piece for smart robots

To break new ground in robotic perception, the researchers explored neuromorphic technology — an area of computing that emulates the neural structure and operation of the human brain — to process sensory data from the artificial skin.
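The basic unit of such neuromorphic models is the spiking neuron, which integrates incoming signals over time and fires only when a threshold is crossed. A minimal sketch of a leaky integrate-and-fire neuron illustrates the idea; the threshold and leak constants here are arbitrary illustrative values, not parameters of the Loihi chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of
# spiking/neuromorphic models. Illustrative only: the threshold and
# leak values are made up, not taken from the Loihi chip.

def lif_run(input_current, threshold=1.0, leak=0.9):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the list of time steps at which the neuron spiked.
    """
    v = 0.0          # membrane potential
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + i          # leak, then integrate the input
        if v >= threshold:        # fire when the threshold is crossed
            spikes.append(t)
            v = 0.0               # reset after spiking
    return spikes

# A steady input drives the neuron to spike at a regular rate.
print(lif_run([0.4] * 10))  # -> [2, 5, 8]
```

Because such neurons only produce activity when they spike, networks built from them can stay largely idle between events, which is where neuromorphic hardware gets its power efficiency.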

As Tee and colleague Harold Soh, an assistant professor in the computer science department, are members of the Intel Neuromorphic Research Community, they say it was a natural choice to use Intel’s Loihi neuromorphic research chip for their new robotic system.

In the initial experiments, the researchers fitted a robotic hand with the artificial skin and used it to read Braille, passing the tactile data to Loihi via the cloud to convert the micro bumps the hand felt into semantic meaning. Loihi achieved over 92% accuracy in classifying the Braille letters, while using 20 times less power than a normal microprocessor.

Soh’s team combined both vision and touch data in a spiking neural network to improve the robot’s perception capabilities. In their experiments, the researchers tasked a robot equipped with both artificial skin and vision sensors to classify various opaque containers containing differing amounts of liquid. They also tested the system’s ability to identify rotational slip, which is important for stable grasping.

In both tests, the spiking neural network that used both vision and touch data classified objects and detected slippage, with classification 10% more accurate than a system that used vision alone. Moreover, using a technique Soh's team developed, the neural networks could classify the sensory data while it was being accumulated, unlike the conventional approach where data is classified only after it has been fully gathered.
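The idea of classifying while data accumulates can be sketched in a toy form: evidence from the touch and vision streams is fused and summed step by step, and a best-guess label can be read out at any time rather than after the full sequence. This is only an illustration of the principle with made-up per-step scores, not the team's actual spiking network:

```python
# Toy sketch of "classify while data accumulates": per-step evidence
# from touch and vision is fused into running totals, and a prediction
# is available after every time step. The scores are invented numbers,
# not real sensor data or the authors' model.

def fuse_and_classify(touch_scores, vision_scores):
    """Each scores list holds per-step evidence for classes "A" and "B".

    Yields the running best-guess class after every time step.
    """
    totals = {"A": 0.0, "B": 0.0}
    for t_ev, v_ev in zip(touch_scores, vision_scores):
        for cls in totals:
            totals[cls] += t_ev[cls] + v_ev[cls]  # fuse both modalities
        yield max(totals, key=totals.get)         # anytime readout

touch = [{"A": 0.1, "B": 0.3}, {"A": 0.2, "B": 0.4}, {"A": 0.1, "B": 0.5}]
vision = [{"A": 0.4, "B": 0.1}, {"A": 0.1, "B": 0.3}, {"A": 0.0, "B": 0.6}]
print(list(fuse_and_classify(touch, vision)))  # -> ['A', 'B', 'B']
```

The early guesses can be wrong, as in the first step here, but the robot does not have to wait for the whole sequence before acting, which is what makes millisecond-scale responses feasible.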

In addition, the researchers demonstrated the efficiency of neuromorphic technology: Loihi processed the sensory data 21% faster than a top-performing graphics processing unit, while using more than 45 times less power.

“We’re excited by these results,” Soh says. “They show that a neuromorphic system is a promising piece of the puzzle for combining multiple sensors to improve robot perception. It’s a step towards building power-efficient and trustworthy robots that can respond quickly and appropriately in unexpected situations.”

Moving forward, Tee and Soh plan to further develop their robotic system for applications in the logistics and food manufacturing industries, where demand for robotic automation is high, especially in the post-COVID era.

The researchers presented the findings at the Robotics: Science and Systems conference in July 2020.

The National Robotics R&D Programme Office funded the work.

Source: National University of Singapore

Find more research news at Futurity.org