Your Guide to AI for Self-Driving Cars in 2020

Self-driving cars, also referred to as autonomous cars, are cars which are capable of driving with little to no human input. A fully self-driving car would be able to drive you from Los Angeles to New York City all on its own while you sit back, relax, and enjoy the smooth ride.

Self-driving cars have been receiving tremendous attention as of late, in large part due to the technological boom in Artificial Intelligence (AI). In just the past few years, AI has gone from a nearly forgotten field to one of the largest Research and Development (R&D) investments of organizations across the world.

Put simply, AI has given us the ability to automate a lot of manual work that previously required some form of human knowledge or skill. In the case of self-driving cars, AI serves as the brains of the car: automatically detecting people and other cars around the vehicle, staying in the lane, switching lanes, and following the GPS to get to the final destination.

So how does all of this work? How are scientists, engineers, and software developers able to program computers to make them drive cars?

An Overview of Self-Driving Technology

Levels of autonomy

When talking about self-driving cars, most technical experts refer to levels of autonomy. The level of autonomy of a self-driving car refers to how much of the driving is done by a computer versus a human: the higher the level, the more of the driving is done by the computer. The levels are summarized below.

  • Level 0: All functionality and systems of the car are controlled by humans
  • Level 1: Minor things like cruise control, automatic braking, or detecting something in the blind spot may be controlled by the computer, one at a time
  • Level 2: The computer can perform at least two simultaneous automated functions, such as acceleration and steering. A human is still required for safe operation and emergency procedures
  • Level 3: The computer can control all critical operations of the car simultaneously, including accelerating, steering, stopping, navigation, and parking under most conditions. A human driver is still expected to be present in case the system alerts them to take over in an emergency
  • Level 4: The car is fully autonomous, with no need at all for a human driver, but only in some driving scenarios. For example, the car can fully drive itself when it’s sunny or cloudy, but not when it’s snowing and the lanes are covered
  • Level 5: The car is completely capable of self-driving in every situation

Most of the self-driving cars we hear about in the news today, such as Tesla’s, are at Level 2: they can drive fairly well on their own, but a human driver is still required to ensure safe operation of the vehicle. Waymo’s vehicles go further, operating at Level 4 within limited, well-mapped areas.

The stages of self-driving

The self-driving cars of today use a combination of various cutting-edge hardware and software technologies to perform their driving. A typical self-driving system will go through 3 stages to perform its driving. For the purposes of this article we’ll call these sensing, understanding, and control.

In the sensing stage, cameras and various sensors are used to see any objects around the car, such as other cars, humans, bicycles, and animals. This stage is really the eyes of the car, constantly seeing everything around it, a full 360 degrees.

In the understanding stage, various AI algorithms, mainly Computer Vision, are used to process the information from the sensors. For example, we might have one computer vision system that processes the video coming from the cameras around the car, to detect all of the other cars on the road around it. Such a system would ideally be able to detect where the cars are, how big they are, and how fast and which way they’re moving. In reality, these systems are designed to map out the entire environment around the car. All of this information will be fed into the control stage of the self-driving.

In the control stage, the self-driving system takes all of the information that the computer vision system was able to extract and uses it to control the car. Knowing the surrounding environment, what’s around the car and how it’s changing, the job of the control system is to move the car safely toward its destination. It activates the brakes if the car in front is slowing down, switches lanes if it needs to exit, and turns on the wipers if it’s raining.
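
To make the flow concrete, here is a minimal sketch of the sense-understand-control loop in Python. All of the function names here (read_sensors, interpret_scene, plan_and_actuate) are hypothetical placeholders for illustration, not any real self-driving API:

```python
import time

def read_sensors():
    """Sensing: grab the latest camera frames, radar returns, and LiDAR sweep.
    (Hypothetical placeholder; a real system reads from hardware drivers.)"""
    return {"camera": ..., "radar": ..., "lidar": ...}

def interpret_scene(raw_data):
    """Understanding: run perception / computer vision models on the raw
    sensor data and return a structured picture of the environment."""
    return {"objects": [], "lanes": [], "signals": []}

def plan_and_actuate(scene):
    """Control: decide the next move (steer, accelerate, brake) and send
    the commands to the car's actuators."""
    pass

# The loop repeats many times per second until the destination is reached.
while True:
    raw = read_sensors()
    scene = interpret_scene(raw)
    plan_and_actuate(scene)
    time.sleep(0.05)  # ~20 iterations per second (illustrative rate only)
```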

We’ll look at each of these stages in more detail.

(1) Sensors

When we humans are driving, we use our eyes to see what’s around us. A self-driving car also needs eyes to see. The eyes of the self-driving car are its various sensors. Most self-driving cars use one or a combination of 3 different sensors: cameras, radar, and LiDAR.

Cameras

Cameras are the most similar to our own eyesight. They capture continuous pictures, i.e. video, through their lenses just like we do. And just like our own eyesight, it helps a lot with driving if a car’s cameras can capture high-quality video — high resolution and high FPS.

Self-driving cars have cameras placed on every side (front, back, left, and right) to be able to see everything around them, a full 360 degrees. Sometimes a mix of different camera types is used: some wide-angle for a wider field of view, and some narrow but high-resolution to see further.

The advantage of using cameras is that they’re the most natural visual representation of the world. A car is seeing exactly what a human driver would see — and more since its internal computer can look through all of the cameras at once. Cameras are also very inexpensive.

The downside is that the data captured by a camera, images and video, doesn’t give much sense of how far other objects are from the car or how fast they’re moving. Cameras are also difficult to use at night, since they simply can’t see as much.

Radar

Radar has traditionally been used to detect moving objects like aircraft and weather formations. It works by transmitting radio waves in bursts or pulses. Once those waves hit an object, they bounce back to the sensor, giving data on the speed and location of the object.
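
As a rough illustration of the physics, here is how range and radial speed fall out of a radar return. The specific numbers are made up for the example:

```python
C = 3.0e8  # speed of light in m/s (approximately)

def radar_range(round_trip_time_s):
    # The pulse travels out to the object and back, so divide by 2.
    return C * round_trip_time_s / 2

def radial_speed(doppler_shift_hz, carrier_hz):
    # Doppler effect: f_d = 2 * v * f0 / c  =>  v = f_d * c / (2 * f0)
    return doppler_shift_hz * C / (2 * carrier_hz)

# Example: a 77 GHz automotive radar sees a return 0.4 microseconds later,
# shifted by 5.13 kHz because the object is moving toward the car.
print(radar_range(0.4e-6))         # ~60 m away
print(radial_speed(5130, 77e9))    # ~10 m/s closing speed
```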

In self-driving cars, radar is used to detect the speed and distance of various objects around the car. It’s a perfect complement to the cameras, which can see what the objects are but not precisely where (how far away) they are. And just like the cameras, the radar will be used in 360 degrees around the car.

Radar also supplements cameras in low-light conditions such as night-time driving. Since radar beams out its own signal, it really doesn’t matter whether it’s 3 AM or noon: the signals move and bounce back in exactly the same way. This is in contrast to cameras, which don’t work as well at night because of the lighting.

The drawback of radar is that the technology is currently limited in its accuracy. Current radar sensors offer very limited resolution. So radar does give us an idea of the distance, location, and speed of other objects, but that idea is somewhat blurry — not as accurate as we’d like it to be.

LiDAR

LiDAR stands for Light Detection and Ranging. It works by sending out beams of light and then calculating how long it takes for the light to hit an object and reflect back to the LiDAR scanner. The distance to the object can then be calculated using the speed of light — these are known as Time of Flight measurements.

A Waymo car with a LiDAR sensor on top.

LiDAR sensors are typically placed on the top of the car, firing thousands of light beams per second. Based on the collected data, a 3D representation called a point cloud can be created to represent the environment around the car.
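
A point cloud is just the set of 3D points recovered from those time-of-flight measurements. Here is a minimal sketch of the geometry, assuming each return comes with the beam’s azimuth and elevation angles (the sample values are invented for illustration):

```python
import math

C = 3.0e8  # speed of light in m/s (approximately)

def lidar_point(round_trip_time_s, azimuth_rad, elevation_rad):
    """Convert one time-of-flight return into an (x, y, z) point
    relative to the sensor (spherical -> Cartesian coordinates)."""
    r = C * round_trip_time_s / 2          # time of flight -> range
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# Thousands of beams per second -> thousands of points -> a point cloud.
returns = [(2.0e-7, 0.10, 0.02), (2.1e-7, 0.11, 0.02)]  # made-up samples
point_cloud = [lidar_point(t, az, el) for t, az, el in returns]
```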

The big advantage of LiDAR sensors is their accuracy. A good LiDAR sensor can pick out details just a few centimeters in size on objects 100 meters away. For example, Waymo’s LiDAR system is said to be able to detect which direction a person is walking, based on the accurate 3D point cloud that comes from LiDAR.

The downside of LiDAR is the cost, which is currently far higher, even 10 times more, than cameras and radar.

(2) Computer Vision

The understanding stage of a self-driving system is the brain — it’s where most of the major processing takes place. In the understanding stage, the goal is to take all of the information that came from the sensors and interpret it. That interpretation is aimed at gathering useful information that can help to safely control the car. The information can be things like:

  • What are all the objects around me, where are they, and how are they moving? So our system might detect things like people, cars, and animals
  • Where am I? The system would determine where all the lanes are and if the car is perfectly in the correct lane, or where the car is relative to the other cars on the road (too close, blind spots, etc)

Today, this information is acquired mostly through AI, and more specifically through Deep Learning for Computer Vision. Large neural networks are trained for tasks like Image Classification, Object Detection, Scene Segmentation, and Driving Lane Detection. The networks are then optimised for the car’s computing unit to be able to handle the real-time speeds required for self-driving.

Semantic Segmentation for Self-Driving.
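
To give a feel for what object detection looks like in practice, here is a sketch using an off-the-shelf detector pretrained on the COCO dataset (which includes cars, people, and bicycles) from the torchvision library. This is just one convenient choice for illustration; production systems train their own networks on driving data:

```python
import torch
import torchvision

# Load a detection network pretrained on COCO.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# A stand-in for one camera frame: a 3-channel 480x640 tensor in [0, 1].
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    predictions = model([frame])[0]

# Each detection comes with a bounding box, a class label, and a confidence.
for box, label, score in zip(predictions["boxes"],
                             predictions["labels"],
                             predictions["scores"]):
    if score > 0.8:
        print(label.item(), box.tolist(), round(score.item(), 2))
```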

Keep in mind that a self-driving car may have data coming in from multiple different sources: cameras, radar, and LiDAR. So all of those regular Computer Vision tasks can be applied to each kind of sensor data, thereby gathering information about the environment around the car in a very comprehensive manner. This also creates a kind of redundancy — if one system fails, the other system still has a chance to make a detection.
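
One very simplified way to picture that redundancy is a vote across sensor pipelines: accept an object if enough independent pipelines report it. This is only a toy sketch of the idea; real sensor fusion is far more sophisticated (e.g. probabilistic filtering over time):

```python
def fuse_detections(camera_objs, radar_objs, lidar_objs, min_votes=2):
    """Toy late fusion: keep an object ID if at least `min_votes`
    independent sensor pipelines detected it."""
    votes = {}
    for objs in (camera_objs, radar_objs, lidar_objs):
        for obj_id in objs:
            votes[obj_id] = votes.get(obj_id, 0) + 1
    return [obj_id for obj_id, n in votes.items() if n >= min_votes]

# The camera missed "car_2" (say, at night), but radar and LiDAR caught it.
print(fuse_detections({"car_1", "person_1"},
                      {"car_1", "car_2"},
                      {"car_1", "car_2", "person_1"}))
```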

The great thing about using Deep Learning for a lot of these tasks is the fact that the networks are trainable. The more data we give them, the better they get. Companies are leveraging this quite heavily — self-driving cars are being put on the roads with human drivers in them, where they can constantly collect new training data to improve themselves.

Computer Vision is really the meat of the self-driving car system. An ideal system will be able to accurately detect and quantify every single aspect of the car’s surrounding environment — moving objects, stationary objects, road signs, street lights — absolutely everything. All of that information is then used to decide how exactly the car should move next.

(3) Control

Once the Computer Vision system has processed the data from the sensors, the self-driving car now has all the information it needs to drive. The role of the control stage is to figure out how to best navigate the car based on the information extracted during the understanding stage.

The technical term for describing how a self-driving car navigates the road is path planning. The goal of path planning is to use the information captured by the Computer Vision system to safely direct the car to its destination while avoiding obstacles and following the rules of the road.

The car knows its target destination from the GPS — the GPS data contains the information for the long-range path. To move towards its target destination, the self-driving system will first “plan the path”, i.e. calculate the optimal (read: shortest travel time) path to its target. That means deciding which roads to take and how fast to drive.
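
Route planning at this level is a classic shortest-path problem over a road graph. Here is a minimal Dijkstra sketch over a made-up road network, with edge weights as travel times in minutes:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: find the minimum-travel-time route."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, travel_time in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue,
                               (cost + travel_time, neighbor, path + [neighbor]))
    return None

# Invented road network: node -> [(neighbor, minutes), ...]
roads = {
    "home":    [("highway", 5), ("main_st", 2)],
    "main_st": [("downtown", 10)],
    "highway": [("downtown", 4)],
}
print(shortest_route(roads, "home", "downtown"))
# (9, ['home', 'highway', 'downtown']) -- faster than via main_st (12 min)
```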

Once the optimal path has been determined, the next step for the system is to determine the best possible “next move”. This next move is always based on following the optimal path to the target destination. It could be to accelerate, to brake, to switch lanes, or any other regular driving move.

At the same time, any move it makes must follow the rules of the road and maintain the safety of the car’s passengers. If the Computer Vision has detected a red light up ahead, then the car should slow down or stop (depending on how far away it is).

All of these controls are sent directly to the car’s mechanical systems. If the car needs to switch lanes, a command to turn the wheel by a very specific amount is sent to the appropriate part of the car. If the car needs to brake, a command is sent to apply the brakes with precisely the amount of pressure needed, slowing down enough to follow the optimal path while maintaining safety and obeying the rules of the road.
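
Low-level commands like “apply exactly this much brake pressure” typically come from a feedback controller. A classic choice is a PID controller on speed; this is a generic textbook sketch with arbitrary gains, not any manufacturer’s actual control code:

```python
class PIDController:
    """Textbook PID: output = Kp*error + Ki*integral(error) + Kd*d(error)/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Target 20 m/s, currently doing 25 m/s: a negative output means "brake".
speed_controller = PIDController(kp=0.5, ki=0.1, kd=0.05)
command = speed_controller.update(target=20.0, measured=25.0, dt=0.05)
print(command)  # negative -> apply the brakes proportionally
```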

This process of sensing, understanding, and controlling is repeated, as often and precisely as possible, until the car reaches its destination.

The Big Players in Self-Driving

Self-driving cars and more generally autonomous vehicles have the potential to become a multi-trillion dollar industry. With big opportunity comes big competition and there’s no shortage of that in this space. There are a few big players.

Tesla

Tesla, and especially Elon Musk whenever he’s on TV, prides itself on the fact that its cars don’t use LiDAR. Instead, they rely mainly on 8 standard cameras located around the car. Tesla then trains a multi-headed Convolutional Neural Network (CNN) to detect everything around the car and performs the navigation accordingly.
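
The “multi-headed” idea is one backbone network whose features feed several task-specific output heads. Here is a minimal PyTorch sketch of that structure; the layer sizes and the particular heads are illustrative inventions, not Tesla’s actual architecture:

```python
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared backbone: one set of convolutional features for all tasks.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific heads reuse those shared features.
        self.object_head = nn.Linear(64, 10)  # e.g. object class scores
        self.lane_head = nn.Linear(64, 4)     # e.g. lane position estimate

    def forward(self, x):
        features = self.backbone(x)
        return self.object_head(features), self.lane_head(features)

net = MultiHeadNet()
objects, lanes = net(torch.rand(1, 3, 240, 320))  # one dummy camera frame
```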

The real power of Tesla’s self-driving technology lies in its software. Updates to the models running in the car can quickly and easily be deployed to all Tesla cars across the world via a software update. Fast and easy software updates mean that Tesla cars are constantly improving, without the owner needing to pay anything extra or even do anything at all.

Tesla also leverages its self-driving fleet for data collection. All Tesla cars which are equipped with the appropriate cameras are used for collecting new training data. All of that data is used to re-train the models and deploy them once again to the entire fleet. It’s an automated, iterative pipeline for continuous improvement of the autonomous system.

A rear camera on a Tesla car, used for self-driving.

Waymo

Waymo is a self-driving car company owned by Alphabet, Google’s parent company. A big edge for Waymo is that it manufactures its own proprietary hardware for its cars. That includes the sensors (cameras, radar, and LiDAR) as well as custom chips for running Computer Vision inference. This allows Waymo cars to achieve the best possible hardware and software optimisations.

Typically, a big drawback of LiDAR is the cost. However, Waymo claims to have developed its own LiDAR sensor which is 90% cheaper than the competition. This sensor is also claimed to be able to detect objects 300 meters away. If any decent level of accuracy is maintained at that range, it offers a huge advantage for Waymo cars over Tesla: regular cameras simply can’t see that far.

Waymo is also pursuing several partnerships with large automobile companies including Chrysler, Toyota, Lexus, and Jaguar.

Uber and Lyft

Uber and Lyft are both very popular ride-hailing companies and are in the perfect position to capitalise on self-driving cars. They’re building their own fleets of self-driving cars equipped with cameras, radar, and LiDAR — multiple LiDARs in the case of Lyft.

Unlike traditional car companies, which have to sell entire (expensive) vehicles equipped with self-driving, Uber and Lyft aim to build fleets of self-driving cars available through their ride-sharing services. This would effectively remove the need for the human driver, and therefore the cost. Anyone anywhere would be able to order a self-driving car for a similar price as calling an Uber or Lyft today. It becomes self-driving as a service.

How far away are we from Level 5 Self-Driving?

After seeing all of this futuristic technology and the big advancements being made, one has to ask: how far away are we from full-on, Level 5 self-driving cars?

It depends.

Some believe that the technology is only a few years away. Elon Musk claims that “A year from now, we’ll have over a million cars with full self-driving”. And that certainly may be possible.

AI is improving incredibly fast with billions of dollars of funding being poured into cutting edge research. Hardware for sensing and computation is improving, especially since companies are now investing in custom hardware for self-driving. So the technology definitely seems to be trending in the right direction.

If we look beyond the technology itself, things get a bit more complicated.

In order for self-driving cars to be accepted and used by people on a daily basis, they really have to work to perfection. We as humans are much more forgiving when another human makes a small error, but are aggressively critical when a computer or machine makes a mistake.

Computers are supposed to just work, without any mistakes. They’re machines after all, so the expectations are much higher. We often expect machines to be an order of magnitude or more accurate than ourselves. The bar is just a lot higher.

On top of that, we have the legal considerations. You can bet your bottom dollar that new laws and regulations will have to be put into place once self-driving fully hits the roads.

Who’s liable in the case of an accident? Should cars be able to drive faster now since they’re self-driving? Does a human have to be in the car at all times? These are all questions that must at some point be answered for self-driving cars to really hit the roads in full force at a level 5.

Overall, self-driving cars are a big plus for society. Less pollution, less traffic, more efficiency, and safer driving can all be expected when cars become self-driving. The technology is trending in the right direction and will hopefully bring in a bright, autonomous future.