Self-Driving Cars and AI

A couple of years ago, I took an online self-driving car morality quiz. It posed several ethical questions that manufacturers may have to answer while developing a car’s AI. For example, if a pedestrian walks in front of the car, should the car swerve to avoid them, knowing that it will crash into a building or telephone pole? Through that quiz, I eventually came to the conclusion that self-driving cars should maximize the safety of those outside the vehicle, because while the passengers of a self-driving car may have airbags, seat belts, and a host of other safety features built into their car, pedestrians do not. Furthermore, I would now argue that the risk of a self-driving car should fall more on those who choose to ride in one than on those who haven’t. Back then, I thought I’d pretty much figured out the morality of self-driving cars, and of any self-driving or autonomous vehicle (after all, these same questions could and should be asked about self-driving bikes or scooters). In reality, there’s so much more we have to ask ourselves about AI as a whole and the rippling effects autonomous vehicles pose to humans, questions even more important than the immediate safety concerns of those directly involved in the use of an autonomous vehicle.

None of this is to say that safety concerns aren’t really, really important to discuss. After all, as Carsurance reports, “there are 9.1 self-driving car accidents per million miles driven, while the same rate is 4.1 crashes per million miles for regular vehicles” (Vardham, Carsurance), which is a pretty serious problem. Safety concerns need to be at the forefront of autonomous vehicle development, but we know they will improve as technology advances and innovation progresses. Statistics tell us not only that the autonomous vehicle industry is growing by 16% globally each year, but that Waymo, one of the biggest autonomous vehicle manufacturers, has made rapid progress, requiring human test drivers to take control of its self-driving cars only once every 11,018 miles on average in 2018, twice the distance from 2017 (Vardham, Carsurance). Because we already know that passenger, driver, and pedestrian safety will one day no longer be a problem, the biggest problems come from what we don’t know will be fixed, don’t know how to fix, and don’t even understand altogether.
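
To put those two statistics side by side, here is a quick back-of-the-envelope calculation using only the numbers quoted above; nothing in it is new data, it just makes the size of the gap (and the pace of improvement) easier to see.

```python
# Back-of-the-envelope comparison using only the figures quoted above
# (Vardham, Carsurance); this is illustrative arithmetic, not new data.

self_driving_crashes_per_million_miles = 9.1
regular_crashes_per_million_miles = 4.1
print(self_driving_crashes_per_million_miles / regular_crashes_per_million_miles)
# ~2.2x the crash rate of regular vehicles, as of those figures

miles_per_disengagement_2018 = 11_018
miles_per_disengagement_2017 = miles_per_disengagement_2018 / 2  # "twice the distance from 2017"
print(miles_per_disengagement_2017)  # ~5,509 miles between human takeovers in 2017
```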

Firstly, I’d like to raise one of the more obvious questions that relates directly to autonomous vehicles: what defines an autonomous vehicle? The question may seem easy to answer, but when it gets down to specifics, there’s no real definitive answer. One of my own first thoughts would be, “well, any car that uses artificial intelligence (AI) to drive itself is autonomous.” But there are several different kinds of AI, and if we let any program drive the car, then safety goes entirely out the window. The question is also important to ask and answer because if a self-driving car were to crash, who do we place the blame on? If I’m the passenger in a self-driving car, is it my fault the car crashed, or is it the manufacturer’s fault? So it becomes important to define at what point an AI transforms a vehicle into an autonomous vehicle. Many new cars today have a cruise control mode, where the car automatically accelerates to keep the speed you were driving at, and, in some more advanced cars, steers in a straight line or even follows road lines. This is mostly a convenience feature for long road trips on open highways, as drivers can save themselves the strain of keeping the gas pedal held down just enough to keep the car moving at the speed limit. But if we consider any program that can perform any task related to driving as an AI that turns a vehicle into an autonomous vehicle, then I could turn cruise control on, not pay attention to the road, and, if I crash, blame the car manufacturer for having a faulty self-driving car. So what level of AI should we use as the standard for autonomous vehicles?
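
To make the cruise control comparison concrete, here is a minimal sketch of what that kind of speed-holding feature boils down to: a simple feedback loop, nothing close to a self-driving system. The function name and the gain value are my own illustrative choices, not any manufacturer’s actual code.

```python
# A minimal sketch of a proportional speed-holding loop (illustrative only;
# the names and the gain are hypothetical, not any manufacturer's real code).

def cruise_control_step(current_speed_mph: float, set_speed_mph: float, gain: float = 0.5) -> float:
    """Return a throttle adjustment that nudges the car toward the set speed."""
    error = set_speed_mph - current_speed_mph
    return gain * error  # push harder the further we are from the set speed

# Example: the car has slowed to 62 mph with cruise set at 65 mph.
print(cruise_control_step(62.0, 65.0))  # positive value -> accelerate slightly
```

A loop like this clearly “performs a task related to driving,” but it just as clearly isn’t doing the driving, which is exactly why the definition matters.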

As IBM tells us, there are four big categories of AI, only three of which currently exist. Artificial intelligence is any program that can perform a specific task. (What is Artificial Intelligence (AI)?, IBM) For example, a calculator is an AI, because it’s a program that can answer math-related questions. Within AI there’s a subset for programs that are able to learn, even able to write their own code to improve themselves. This is called machine learning, and it will most often need some form of human intervention to tell the machine what it’s done right, what it’s done wrong, and how to improve itself. (What is Artificial Intelligence (AI)?, IBM) It also has only one hidden layer of code, where the program processes the inputs it’s given and decides on a given output. (What is Artificial Intelligence (AI)?, IBM) Imagine a calculator that doesn’t know the answer to math problems, but will guess, being told by humans whether its guess was closer to or further from the answer, until it gets the answer right, and then remembers what the right answers were. Eventually, the machine learning AI will function like a normal calculator. The next form of AI is deep learning AI, which is like machine learning AI, except that it doesn’t require any help from humans and can learn on its own through multiple hidden layers that process the original inputs, telling itself how much it’s learning and improving. (What is Artificial Intelligence (AI)?, IBM) Now imagine a calculator that can ask itself math questions, guess and check the answers to those questions with no outside help, and program itself to understand how to solve math equations. These are all forms of AI that currently exist, though their applications are much broader than just calculators we already know how to program ourselves. There’s one more form of AI that we’ve yet to create: Artificial General Intelligence, or Artificial Super Intelligence (ASI), which is an AI that can process and interpret anything, and even choose what it wants to process, without human intervention, thus replicating the same level of intelligence as a human. (What is Artificial Intelligence (AI)?, IBM) But where should we draw the line on what form of AI makes an autonomous vehicle, and is it even moral to choose?
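
To make the “guessing calculator” analogy concrete, here is a toy sketch of that guess-and-feedback loop. It’s deliberately simplistic, and every name in it is my own invention rather than anything from a real library, but it captures the idea of a program that starts out knowing nothing, is told only whether it’s closer or further, and remembers the answers it eventually confirms.

```python
# Toy version of the "guessing calculator": it doesn't know how to add,
# it only guesses, receives higher/lower feedback (the human intervention
# in machine learning), and memorizes answers once they're confirmed.
# All names here are illustrative, not from any real library.

def human_feedback(guess, true_answer):
    """Stands in for the human telling the machine how it did."""
    if guess == true_answer:
        return "correct"
    return "too low" if guess < true_answer else "too high"

learned_answers = {}  # answers the "calculator" has confirmed and remembered

def guessing_add(a, b, lo=-1000, hi=1000):
    if (a, b) in learned_answers:          # already learned this problem
        return learned_answers[(a, b)]
    true_answer = a + b                    # only the "human" side knows this
    while lo <= hi:
        guess = (lo + hi) // 2             # guess the midpoint of what's left
        verdict = human_feedback(guess, true_answer)
        if verdict == "correct":
            learned_answers[(a, b)] = guess
            return guess
        if verdict == "too low":
            lo = guess + 1
        else:
            hi = guess - 1

print(guessing_add(17, 25))  # 42, reached by guided guessing, then memorized
```

Deep learning, by the article’s analogy, would be the version of this loop where the program generates its own problems and its own feedback.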

As I stated before, not defining what level of AI programming makes a vehicle an autonomous vehicle opens the floodgates to a whole host of legal and safety issues. Not only that, but it also allows anyone with the ability to write code to manufacture, drive, and sell an autonomous vehicle. While safety has been quite possibly the biggest factor for self-driving car manufacturers to worry about (because who would want to buy or even ride in an unsafe car?), the same can’t be said for everyone in the world, and even placing a homemade autonomous vehicle onto the streets with no intention of profit could spell catastrophic levels of danger for everyone, everywhere. Self-driving cars are already on our streets, so we really need to worry about what we allow to be used in any form of autonomous vehicle, including the self-driving cars that have already been deployed. So again, where do we draw the line? Should we allow any car with machine learning AI to be driven, or should we wait until the program has spent at least 1,000 hours in testing and learning before it’s deployed? Should we develop some sort of test a program needs to pass before being allowed on our streets, and if so, what defines passing the test? Should the AI be able to improve itself after the test, or would that only risk the program getting worse? Should we allow only deep learning programs that improve themselves to drive, or only programs that have humans determining their success? Should we wait for an ASI to be developed to drive our cars, or should we never use an ASI at all? To work through these questions, and even to understand why they’re important to ask, I should talk about human intelligence as well as artificial intelligence.

I’ve defined Artificial Intelligence here (or at least given IBM’s definition), but I haven’t said anything about intelligence by itself. When talking about AI, most will define intelligence in a way that includes both AI and human brains. Sam Harris defined intelligence as “a matter of information processing in physical systems” (Harris, TED Summit), which includes our brains, since the physical atoms that make up our brains process the information from our various senses (sight, sound, time), but also includes Artificial Intelligence. While AI may seem like something that exists only in a virtual world, like the internet, our computers and the physical circuitry inside those computers are what make up our programs, making AI very much a physical, tangible thing. So what should we teach a potential ASI? Thinking back to one of the most foundational ethical problems in history, the Trolley Dilemma, should we teach it anything at all? For those who don’t know, the Trolley Dilemma goes as follows: a trolley is about to run over four people tied to the track, but there’s only one person tied to another track that you can switch the trolley onto; what do you do? (Philippa Foot) If we create a program that can interpret any information and build anything it wants, and then teach that program that it’s better to run one person over than to let four people be run over, what will it do on a worldwide scale? This question is something that scares many about AI, or ASI: that we will be overrun, and even driven extinct, by a super intelligence of our own creation. The concern isn’t necessarily an all-out, inevitable war between humans and an ASI that can create robot soldiers stronger and smarter than us, but that we will develop a program “so much more competent than we are that the slightest divergence between their goals and our own could destroy us” (Harris, TED Summit). When we consider real-world ethical problems like the Trolley Dilemma, what will happen if that ASI thinks that the death of a few humans will save the rest of humanity? Would it then be wrong to stop that ASI from killing 49% of humanity in order to keep 51% of us alive and better off, if doing nothing would kill even more?
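
To see why this worries people, notice how small the step is from the Trolley Dilemma to a line of code. The sketch below is deliberately crude and entirely hypothetical: the option names, the numbers of people at risk, and the weights are mine, not anything a real manufacturer has published. The point is simply that whatever answer we give to the dilemma ends up as a value judgment frozen into software.

```python
# A deliberately crude, hypothetical sketch of a "minimize expected harm" rule.
# The options, counts, and weights are illustrative only; the point is that
# whichever moral answer we pick ends up hard-coded as numbers like these.

options = {
    "stay_on_course": {"pedestrians_at_risk": 4, "passengers_at_risk": 0},
    "swerve":         {"pedestrians_at_risk": 1, "passengers_at_risk": 0},
}

def expected_harm(outcome, pedestrian_weight=1.0, passenger_weight=1.0):
    return (pedestrian_weight * outcome["pedestrians_at_risk"]
            + passenger_weight * outcome["passengers_at_risk"])

def choose(options):
    # pick the option whose weighted harm is lowest
    return min(options, key=lambda name: expected_harm(options[name]))

print(choose(options))  # "swerve": the trolley answer, now a design decision
```

Change the weights, and the car’s answer to the dilemma changes with them; that is the sense in which someone, somewhere, has to decide what to teach the machine.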

These questions about the morality of ASI may seem unnecessary to some, especially to those who don’t believe that a true ASI, capable of interpreting as much information as a human can, is possible. The thing is, ASI is not only a possibility, it’s inevitable, and there’s no option to stop working on creating better and better AI either. Humans will continue to improve AI because we want to and need to. AI is ingrained in our culture: we already have programs that recommend movies to us on Netflix and programs that tell us which stocks to trade. Businesses are especially dependent on AI to function and turn a profit: Facebook uses AI to collect our data, Snapchat uses AI to recognize faces, and Amazon uses AI for voice recognition in its home products. Furthermore, we use AI to predict the effects of climate change, warn us about potential hurricanes, and develop cures for diseases. The use of AI in science is critical to our survival as a species, and if we don’t increase our technological capabilities, we don’t even have to question whether humanity will one day be extinct, “merely whether we’re going to be taken out by the next killer asteroid, supervolcano, or some other problem that better technology could have solved” (Tegmark, TED). So we know that we’re going to improve the capabilities of AI, but how do we know that an ASI will be smarter than us or uncontrollable? We already know that an AI can be smarter than humans at the specific task it’s programmed for. We’ve created chess computers that have reviewed hundreds of thousands of games played by humans, memorized the best move to play in any scenario, and beaten the human world champions at chess. The same can’t be said for every board game: in Go, that same strategy of AI learning wasn’t able to beat the best players. However, “Google DeepMind’s AlphaZero AI took 3,000 years of human Go games… ignored it all and became the world’s best player by playing against itself” (Tegmark, TED). Not only have we created AI that can use all of our knowledge to perform a specific task better than us, but we’ve created AI that can learn by itself better than it can with us. Currently, there’s no program that can interpret all the information the human brain can. However, we know that if we could create a program capable of interpreting information the same way a human can, it would be smarter than us from the moment of its creation, purely by nature of efficiency. “Electronic circuits function about a million times faster than biochemical ones” (Harris, TED Summit), meaning that a program with the same level of intelligence as a human would make decisions and learn a million times faster than that same human ever could, which I would see as making the program much smarter than any human.
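
The “million times faster” point is worth doing the arithmetic on, because it’s what turns “as smart as us” into “much smarter than us.” The little calculation below uses only the speedup factor from the Harris quote; the one-week framing is just my own convenient unit for a sense of scale.

```python
# Unit arithmetic behind the "million times faster" claim. The 1,000,000x
# speedup comes from the Harris quote above; everything else is just
# converting weeks into years for a sense of scale.

speedup = 1_000_000      # electronic circuits vs. biochemical ones
weeks_per_year = 52

machine_weeks = 1                                  # one week of machine "thinking"
human_equivalent_years = machine_weeks * speedup / weeks_per_year
print(round(human_equivalent_years))               # ~19,231 years of human-level work
```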

Knowing that our need to create ever more intelligent AI means we will one day create a super intelligence leads us to question what we should do with it. Should we let robots take over jobs that they can do better than humans? If ASI can do everything better than humans, should we let it have all of our jobs? What should we do for the unemployed? Should we let ASI run our government or start wars? How should we distribute our wealth if a program does everything for us, and would wealth mean anything? The questions raised by what we should do with ASI are the same questions we need to ask about autonomous vehicles: what should we do with those cars once we’ve decided how to make them as safe as possible, either by letting AI develop its own safety measures or by making the AI learn our own? As Nico Larco said in a recent TEDx talk, “[Autonomous Vehicles] are not a transportation issue” (Larco, TEDx). But if not, what are they? Autonomous vehicles pose many more questions than those regarding physical safety. For example, what should we do about the taxi drivers who will be put out of a job by autonomous vehicles? Who should we let ride in autonomous vehicles? If autonomous vehicles enter the mainstream, should they be privately owned or a public service? Those last two questions are very important, much like how we should think about wealth in a society without jobs. If there are no privately owned cars, and cars are always driving, then we won’t need the parking lots taking up space where “our cars are parked 95% of the time” (Larco, TEDx), because we could have self-driving cars always driving people around and use that parking space for any number of public goods. Furthermore, transportation is not available to everyone, particularly the disabled, which “is where the true value of autonomous vehicles and Artificial Intelligence lay” (LaBruna, TEDx), because although we don’t have that all-inclusive society today, AI provides a real possibility for it to happen.

Ultimately, we need to be concerned for the immediate safety of everyone around us until AI is able to be concerned for us. But more importantly, we need to be concerned about what we intend to do with AI and ASI, who gets to reap the benefits, who gets to decide on its values, and, most of all, what those values should be, because these are the real questions AI asks, the real questions autonomous vehicles ask, and the very same questions I was never asked when I solved the ethics of self-driving cars.