The widespread introduction of autonomous vehicles could bring many benefits – advocates argue they will reduce traffic, the burden of driving, and, if the cars are electrified, emissions. They could also improve access for children, the elderly, and people with disabilities – but the most important promised benefit is improved safety.
U.S. road fatalities increased 5.6 percent from 2015 to 2016 – a disturbing trend, and the largest increase in the last decade. Proponents of self-driving technology argue that the cars will slash these numbers significantly because the human driver is taken out of the equation. According to the National Highway Traffic Safety Administration, there were 5,987 pedestrian fatalities in 2016 – the highest number since 1990 – and 847 bicyclist fatalities, the highest since 1991. Although fatalities linked to distraction and drowsiness fell 2.2 and 3.5 percent, respectively, those gains were offset by increases in other reckless behaviors: speeding-related fatalities rose 4 percent, alcohol-impaired fatalities rose 1.7 percent, and unbelted fatalities rose 4.6 percent.
Autonomous vehicles are being tested in several states and provinces, such as California, Pennsylvania, and Ontario. The graphic below shows the status of autonomous vehicle testing laws across the country – 25 of 50 states have passed laws overseeing testing. Uber and Waymo have taken the lead in testing – Waymo has logged over 5 million miles, and Uber, although far behind, has logged about 2 million miles of its own. California has been working with testing companies under a regulatory framework, while states like Arizona have given companies free rein to test the vehicles on public roads, with a backup human in the driver's seat to compensate for any failures in the software. But what happens if that driver gets distracted and loses focus? Or when the autonomous system has no sufficient way of warning the driver that they need to take over?
The NTSB presents its findings
According to a preliminary report released by the National Transportation Safety Board (NTSB), that is exactly what happened when an Uber self-driving test vehicle, a modified Volvo XC90, struck and killed a pedestrian walking a bicycle across the street in Tempe, Arizona on March 18, 2018. The initial reaction of the Tempe police chief on March 19, after viewing the vehicle's own video of the event, was that Uber was likely "not at fault" for the incident. After a more thorough investigation, however, the NTSB report states that the Uber system "registered…observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path." The Volvo XC90 has its own factory emergency braking system, but it was disabled while the Uber self-driving system controlled the vehicle, to "reduce the potential for erratic behavior." According to the report, the self-driving system determined 1.3 seconds before the collision that emergency braking was needed – braking that could have prevented the crash or reduced its severity – but the system was neither designed to perform an emergency stop under computer control nor to alert the operator. The driver appeared to have been distracted by the computer interface and did not see the pedestrian step out into the street. By the time the driver looked up, saw her and pressed the brake, it was too late.
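A back-of-the-envelope kinematics check puts the NTSB timeline in perspective. The speed (43 mph) and detection times (6 and 1.3 seconds before impact) come from the report; the 0.7 g deceleration is our assumed typical dry-pavement braking figure, not an NTSB number:

```python
# Rough kinematics check of the NTSB timeline. Speed and detection times are
# from the report; the 0.7 g deceleration is an assumed dry-pavement value.

MPH_TO_FPS = 5280 / 3600          # 1 mph ~= 1.467 ft/s
G = 32.2                          # gravitational acceleration, ft/s^2

speed_fps = 43 * MPH_TO_FPS       # ~63 ft/s
decel = 0.7 * G                   # ~22.5 ft/s^2

# Distance to the pedestrian at each detection time, assuming constant speed
dist_6s = speed_fps * 6.0
dist_1_3s = speed_fps * 1.3

# Distance needed to brake to a full stop from 43 mph: v^2 / (2a)
stop_dist = speed_fps**2 / (2 * decel)

print(f"Distance at 6.0 s before impact: {dist_6s:.0f} ft")
print(f"Distance at 1.3 s before impact: {dist_1_3s:.0f} ft")
print(f"Full-stop braking distance at 0.7 g: {stop_dist:.0f} ft")
```

Under these assumptions, the first detection at 6 seconds left roughly 380 feet – far more than the roughly 90 feet needed to stop – while at 1.3 seconds a full stop was marginal, though hard braking would still have shed most of the vehicle's speed before impact.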
Safety advocates across the spectrum have cautioned lawmakers about the rapid pace of testing, saying it is too soon to have these cars on public roadways, interacting with pedestrians and bicyclists. Moreover, reports suggest that Uber's self-driving system was struggling to navigate public streets, with drivers needing to intervene and take control from the automated system once every 13 miles, compared with more than 5,000 miles between interventions for the Waymo systems being tested in California. Real-world testing on public roads is clearly needed to improve self-driving technology, but it must only be done once public safety can be assured.
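Normalizing those reported disengagement figures to a common per-1,000-mile rate makes the gap concrete (a rough sketch using only the two numbers above – one intervention per 13 miles for Uber, one per 5,000 for Waymo):

```python
# Normalize reported disengagement rates to interventions per 1,000 miles.
# Figures from the article: Uber ~1 per 13 mi, Waymo ~1 per 5,000 mi.
uber_miles_per_intervention = 13
waymo_miles_per_intervention = 5000

uber_rate = 1000 / uber_miles_per_intervention     # interventions per 1,000 mi
waymo_rate = 1000 / waymo_miles_per_intervention

ratio = uber_rate / waymo_rate

print(f"Uber:  {uber_rate:.1f} interventions per 1,000 mi")
print(f"Waymo: {waymo_rate:.1f} interventions per 1,000 mi")
print(f"Uber drivers intervened roughly {ratio:.0f}x more often")
```

By this measure, Uber's human drivers were intervening nearly 77 times per 1,000 miles – close to 400 times as often as Waymo's.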
Congress is pushing federal legislation too quickly
This fatal crash is a stark reminder of the risks involved in racing to bring automated driving technology to market without adequate oversight. Senator John Thune, the Republican Chairman of the Senate Committee on Commerce, Science, and Transportation, remarked that the tragedy underscores the need for Congress to "update rules, direct manufacturers to address safety requirements, and enhance technical expertise of regulators." Senator Gary Peters also chimed in, saying that "Congress must move quickly to enhance oversight of self-driving vehicles by updating federal safety rules and ensuring regulators have the right tools and resources to oversee the safe testing and deployment of these emerging technologies."
Yet while state and local governments grapple with responses to this tragedy, the architects of the Senate self-driving bill are renewing their push to get it through Congress. The Detroit News reported that Peters and Thune are still attempting to win support from reluctant senators. The bipartisan duo is also considering attaching the measure to another bill with better prospects for a full vote, or passing it as a standalone bill.
This push concerns us, as we question whether the AV START Act is the right vehicle to meet those aims. The bill would allow hundreds of thousands more autonomous vehicles onto our roads under lax oversight, and would preempt the important work that state and local governments are doing to regulate AV testing in their jurisdictions.
Safety of all users of the road must be the top priority
In our policy brief "Maximizing the Benefits of Self Driving Vehicles," UCS advocates that "rigorous testing and regulatory oversight of vehicle programming are essential to ensure that self-driving vehicles protect both their occupants and those outside the vehicle." In October 2017, UCS expressed its concerns about the lack of scientifically based safeguards in the Senate's AV START bill.

Already, cities and states are discussing how to regulate AVs more strictly. Pittsburgh Mayor Bill Peduto planned to ask representatives from the AV industry to agree to a 25-mph limit on city roads. "Pittsburgh should have a very strong voice in whatever Pennsylvania should decide to do," Peduto told reporters. "These are our streets. They belong to the people of the city of Pittsburgh and the people of the city of Pittsburgh should be able to have certain criteria that shows them that safety is being taken first." However, the city has limited authority to regulate vehicles on its streets. California is taking a different tack: its Public Utilities Commission recently released guidelines that will allow AVs to pick up passengers – as long as the company has held an autonomous vehicle testing permit from the DMV for at least 90 days, agrees not to charge for rides, and files regular reports covering the miles its self-driving vehicles travel, the rides they complete, and the disabled passengers they serve.
Uber and other companies will have to reassess their procedures for AV road testing, and states will have to re-evaluate how freely they allow self-driving cars to be tested on their roads. Furthermore, municipal governments need to be at the table, working with companies to develop robust safety standards. We need to ensure, at all levels of government, that sound safeguards are implemented, so that autonomous vehicles can truly achieve the safety benefits they are expected to deliver.
Support from UCS members makes work like this possible. Will you join us? Help UCS advance independent science for a healthy environment and a safer world.