When Will Autonomous Vehicles be Safe Enough? An interview with Professor Missy Cummings

September 25, 2018 | 2:02 pm
Jeremy Martin
Senior Scientist and Director of Fuels Policy

Autonomous vehicle (AV) supporters often tout safety as one of the most significant benefits of an AV-dominated transportation future. As explained in our policy brief Maximizing the Benefits of Self-Driving Vehicles:

While self-driving vehicles have the potential to reduce vehicle-related fatalities, this is not a guaranteed outcome. Vehicle computer systems must be made secure from hacking, and rigorous testing and regulatory oversight of vehicle programming are essential to ensure that self-driving vehicles protect both their occupants and those outside the vehicle.

Professor Mary “Missy” Cummings, former fighter pilot and current director of the Humans and Autonomy Lab at Duke University, is an expert on automated systems. Dr. Cummings has researched and written extensively on the interactions between humans and unmanned vehicles, regulation of AVs, and potential risks of driverless cars. I had the opportunity to speak with Dr. Cummings and ask her a few questions about current technological limitations to AV safety and how to use regulation to ensure safety for all Americans, whether they are driving, walking, or biking.

Below are some key points from the interview, as well as links to some of Dr. Cummings’ work on the topics mentioned.


Jeremy Martin (JM): Safety is one of the biggest arguments we hear for moving forward with autonomous vehicle development. The U.S. National Highway Traffic Safety Administration has tied 94% of crashes to a human choice or error, so safety seems like a good motivating factor. In reality, how far off are we from having autonomous systems that are safer and better than human drivers? And are there specific software limitations we need to address before we remove humans from behind the wheel?

Dr. Mary “Missy” Cummings (MC): I think one of the fallacies in thinking about driverless cars is that, even with all of the decisions that have to be made by humans in designing software code, somehow the cars are going to be free from human error just because there’s not a human driving. Yes, we would all like to get the human driver out from behind the wheel, but that doesn’t completely remove humans from the equation. I have an eleven-year-old, and I would like to see driverless cars in place in five years so she’s not driving. But as an educator and a person who works inside these systems, I can tell you we’re just not there.

We are still very error prone in the development of the software. So, what I’d like to see in terms of safety is for us to develop a series of tests and certifications that make us comfortable that the cars are at least going to be somewhat safer than human drivers. If we could get a reliable 10% improvement over humans, I would be good with that. I think the real issue right now, given the nature of autonomous systems, is that we really do not know how to define safety for these vehicles yet.

JM: So you’re not optimistic about meeting your five-year target?

MC: No, but it’s not a discrete yes-or-no answer. The reality is that we’re going to see more and more improvement. For example, automatic emergency braking (AEB) is great, but it’s still actually a very new technology and there are still lots of issues that need to be addressed with it. AEB will get better over time. Lane detection and the car’s ability to see what’s happening and avoid accidents, as well as features like Toyota’s Guardian mode, will all get better over time.

When do I think that you will be able to use your cell phone to call a car, have it pick you up, jump in the backseat and have it take you to Vegas? We’re still a good 15-20 years from that.

JM: You mentioned that if AVs performed 10% better than human drivers, that’s a good place to start. Is that setting the bar too low? How do we set that threshold and then how do we raise the bar over time?

MC: I think we need to define that as a group of stakeholders and I actually don’t think we need a static set of standards like we’re used to.

With autonomous vehicles, it’s all software, not hardware. But we don’t certify drivers’ brains cell by cell; what we do is certify you by how you perform in an agreed-upon set of tests. We need to take that metaphor and apply it to driverless cars. We need to figure out how to do outcome-based testing that is flexible enough to adapt to new coding approaches.

So, a vision test, for example, should be a lot more stringent in the early days of driverless cars, because we have seen some deaths and we know that sensors like lidar and radar have serious limitations. But as those get addressed, I would be open to less stringent testing. It’s almost like graduated licensing: I think teenagers should have to go through a lot more testing than me at 50. Over time, you gain trust in a system because you see how it operates. Another issue is that cars can now do over-the-air software updates. So, do cars need to be tested only when a new model comes out, or every time a new software upgrade comes out? I don’t claim to have all the answers, and I’ll tell you that nobody does right now.

JM: One safety concern that emerges in discussions around AVs is cybersecurity. What are the cybersecurity threats we should be worried about?

MC: There are two cybersecurity threats I’m concerned about. One is active hacking: somebody hacks into your system and takes it over or degrades it in some way. The other is that in the last year, a lot of research has shown how the convolutional neural nets that power the vision systems for these cars can be passively hacked. By that I mean you don’t mess with the car’s system itself, you mess with the environment. You can read more about this, but for example, you can modify a stop sign in a very small way and trick an algorithm into seeing a 45-mile-per-hour speed limit sign instead of a stop sign. That is a whole new threat to cybersecurity that is emerging in research settings and that, to my knowledge, no one in the companies is addressing. This is why, even though I’m not usually a huge fan of regulations, in this particular case I do think we need stronger regulatory action to make sure that we, both as a society and as an industry, are addressing what we know are going to be problems.
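The passive attack Dr. Cummings describes comes out of adversarial machine learning research. As a rough illustration of the underlying idea, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial-example techniques. This is a digital-domain toy under our own assumptions, not the physical stop-sign attack from the literature; the classifier and image below are stand-in placeholders.

```python
# Minimal FGSM sketch (illustrative only): perturb an image slightly so a
# classifier's prediction changes. The model and image are toy placeholders,
# not a real traffic-sign system.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return `image` nudged in the direction that most increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Each pixel moves by +/- epsilon along the sign of the loss gradient.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Toy usage: a stand-in linear classifier and a random 32x32 RGB "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)   # pixel values in [0, 1]
label = torch.tensor([0])          # pretend class 0 is "stop sign"
adversarial = fgsm_attack(model, image, label)
print(model(image).argmax(dim=1), model(adversarial).argmax(dim=1))
```

The unsettling property is that epsilon can be kept small enough that a human still sees the original image while the model’s answer changes; the stop-sign research achieved a similar effect in the physical world with carefully placed stickers.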

JM: We hear a lot about level 3 and 4 automation, where a human backup driver needs to be alert and ready to take over for the car in certain situations, and after that fatal accident in Arizona we know what the consequences can be if a backup driver gets bored or distracted. What kinds of solutions are there for keeping drivers off their phones in AVs? Or are we just going to be in a lot of trouble until we get to level 5 automation and we no longer need backup drivers?

MC: I wrote a paper on boredom and autonomous systems, and I’ve come to the conclusion that it’s pretty hopeless. I say that because humans are just wired for activity in the brain. So, if we’re bored or we don’t perceive that there’s enough going on in our world, we will make ourselves busy. That’s why cellphones are so bad in cars, because they provide the stimulation that your brain desires. But even if I were to take the phones away from people, what you’ll see is that humans are terrible at vigilance. It’s almost painful for us to sit and wait for something bad to happen in the absence of any other stimuli. Almost every driver has had a case where they’ve been so wrapped up in their thoughts that they’ve missed an exit, for example. Perception is really linked to what you’re doing inside your head, so just because your eyes are on the road doesn’t mean you’re going to see everything that’s in front of you.

JM: What’s our best solution moving forward when it comes to safety regulations for autonomous vehicles? Is it just a matter of updating the standards that we currently have for human-driven vehicles or do we need a whole new regulatory framework?

MC: What we need is an entirely new regulatory framework where an agency like NHTSA would oversee the proceedings. They would bring together stakeholders like all the manufacturers of the cars, the tier-one suppliers, and the people who are doing the coding, as well as a smattering of academics who are in touch with the latest and greatest in related technologies such as machine learning and computer vision. But we don’t just need something new for driverless cars; we also need it for drones, and even medical technology. I wrote a paper about moving forward in society with autonomous systems that have on-board reasoning. How are we going to think about certifying them in general?

The real issue here, not just with driverless cars, is that we have an administration that doesn’t like regulation, so we’re forced to work within the framework that we’ve got. Right now, NHTSA does have the authority to mandate testing and other interventions, but they’re not doing it. They don’t have anyone on staff who would understand how to set this up. There’s just a real lack of qualified artificial intelligence professionals working in and around the government. This is actually why I’m a big fan of public-private partnerships to bring these organizations together: let NHTSA kind of quarterback the situation, but let the companies get in there with other experts and start solving some of these problems themselves.


Dr. Mary “Missy” Cummings is a professor in the Department of Mechanical Engineering and Materials Science at Duke University, and is the director of the Humans and Autonomy Laboratory and Duke Robotics. Her research interests include human-unmanned vehicle interaction, human-autonomous system collaboration, human-systems engineering, public policy implications of unmanned vehicles, and the ethical and social impact of technology.

Professor Cummings received her B.S. in Mathematics from the US Naval Academy in 1988, her M.S. in Space Systems Engineering from the Naval Postgraduate School in 1994, and her Ph.D. in Systems Engineering from the University of Virginia in 2004. Professor Cummings served as a naval officer and military pilot from 1988 to 1999 and was one of the Navy’s first female fighter pilots.