Towards Safer Navigation of Indoor Robots

Source: Deep Learning on Medium

The realm of robotics has witnessed a tremendous surge in research, development, and deployment of assistive machinery. From self-driving cars to autonomous floor-cleaning robots, intelligent machines have come a long way. A handful of core technologies underpin these domains, with computer vision being a primary example.

Modern computer vision algorithms have made navigation far easier to achieve. Most of the classic techniques are already implemented in open-source libraries such as OpenCV, scikit-image, and imgaug, which makes it straightforward to incorporate complex algorithms into a system and build an autonomous, intelligent robot. For a given environment, we essentially had to pick the most appropriate algorithm for feature extraction and motion generation. With the advent of deep learning, however, even this need was reduced.

Deep learning discards hand-crafted feature extraction and instead uses deep convolutional neural networks (CNNs) to learn features automatically. For instance, semantic and instance segmentation algorithms use CNNs to learn pixel-wise representations, creating a full semantic understanding of the environment, and this is exactly what we implemented at Aziobot.
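To make the pixel-wise idea concrete, here is a minimal sketch of the final step of a segmentation network: the model outputs one score map per class, and the per-pixel argmax yields the label image. The shapes, class ids, and random scores below are made up purely for illustration, not taken from Aziobot's system.

```python
import numpy as np

# Hypothetical setup: 3 classes (0=background, 1=person, 2=floor)
# on a tiny 4x4 image; a real network would produce these logits.
num_classes, h, w = 3, 4, 4
rng = np.random.default_rng(0)
logits = rng.standard_normal((num_classes, h, w))

# Per-pixel argmax over the class axis gives the segmentation label map.
label_map = np.argmax(logits, axis=0)

print(label_map.shape)  # (4, 4)
```

Every pixel thus receives exactly one class label, which is what gives the robot its dense, per-pixel understanding of the scene.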

Aziobot is a robotics startup based in Den Bosch, Netherlands, focused on developing and deploying autonomous floor-cleaning machines for industry. Our first product, SB2, is already out for ground testing. The robot has several interesting features, with intelligent cleaning and simple setup being the most notable. It can clean anywhere and anytime, even in low ambient light. Furthermore, SB2 now has all the safety features that indoor robots should incorporate. Let's take a moment to see SB2 in action!

A glimpse of capabilities — Aziobot’s SB2

This was a demonstration of an SB2 feature called autonomous spot cleaning: the robot can clean a particular spot indicated by the user on the selected map. SB2 is also capable of autonomous zonal cleaning, where the selected map can be divided into multiple zones based on the user's cleaning requirements.

Another important facet of SB2 is the simplicity of its mapping process. Many cleaning robots are hard-coded in terms of motion planning and path following. SB2, on the other hand, provides a very simple UI for mapping the area. After mapping, the motion profile is generated autonomously and the cleaning process begins, all by itself. Sounds far-fetched? Not really; have a look below:

https://www.youtube.com/watch?v=4eOCBW_katg

Now that the robot is introduced, let's talk about its safety facets. The primary safety layer is already a core part of Aziobot's navigation stack, which provides basic collision avoidance for nearby walls, doors, and other local static objects. The secondary layer adds more functionality in terms of detection: a custom-trained semantic segmentation model, Google's DeepLabV3, running on the on-board computer.

This semantic segmentation algorithm has been trained to identify people as well as the floor structure at the wheel base. People are usually the dynamic obstacles that can obstruct a cleaning robot's routes, and this segmentation algorithm lays the foundation for avoiding collisions with them.

Fig.2 On-board Segmentation Example

Once people are detected in the frame, as seen above, their average distances and velocities are calculated and fed to the navigation stack, which uses this information to create a kind of no-go zone around them. The algorithm can also distinguish basic floors, wooden floors, and carpet/rug-type surfaces. Since scrubbing has to differ across floor types, this lets the robot decide when to stop the water flow and when to stop scrubbing, depending on the floor it encounters.
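As a rough illustration of the first step, averaging distance over person pixels can be done by masking a depth image with the segmentation labels. The class ids, toy label map, and depth values below are invented for this sketch; Aziobot's actual pipeline is not public.

```python
import numpy as np

# Made-up class ids for the sketch.
PERSON, FLOOR, CARPET = 1, 2, 3

# Toy 3x3 segmentation label map and aligned depth image (meters).
labels = np.array([[PERSON, PERSON, FLOOR],
                   [FLOOR,  CARPET, CARPET],
                   [FLOOR,  FLOOR,  FLOOR]])
depth_m = np.array([[2.0, 2.2, 5.0],
                    [4.8, 1.0, 1.1],
                    [4.9, 5.0, 5.1]])

# Average depth over pixels classified as person.
person_mask = labels == PERSON
avg_person_dist = depth_m[person_mask].mean() if person_mask.any() else None

print(avg_person_dist)  # 2.1
```

In a real system this scalar (together with an estimated velocity from frame-to-frame tracking) would be what gets handed to the navigation stack.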

In the image above, the distinct pink color on the ground indicates a carpet/rug-type surface where scrubbing is strictly forbidden, while the segmented person in the frame contributes to the dynamic obstacles. This semantic understanding of the surroundings makes SB2 more intelligent and capable of handling multiple flooring types. Further research is ongoing to distinguish tiles, marble, wood, and carpet to enhance this capability even more.
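The no-go zone around a detected person can be pictured as marking cells of an occupancy grid within some safety radius as forbidden. The grid size, cell positions, and radius below are hypothetical; they simply show the idea, not SB2's actual parameters.

```python
import numpy as np

# Toy 7x7 occupancy grid; 0 = free, 1 = no-go.
grid = np.zeros((7, 7), dtype=int)
person_cells = [(3, 3)]   # grid coordinates of detected people (assumed)
radius = 2                # safety radius in cells (assumed)

# Mark every cell within `radius` of each person as a no-go zone.
ys, xs = np.mgrid[0:7, 0:7]
for py, px in person_cells:
    grid[(ys - py) ** 2 + (xs - px) ** 2 <= radius ** 2] = 1

print(grid[3, 3], grid[3, 5], grid[0, 0])  # 1 1 0
```

A planner reading this grid would then route the robot around the inflated region instead of driving close to the person.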

This capability makes SB2 safe around people, which is essential for the social acceptability of modern robots. Human safety is the top priority, and SB2 meets this requirement. For more information, please feel free to visit www.aziobot.com