Uber’s Deadly Self-Driving Accident: What We Know So Far

There’s no question that on Sunday night an Uber test vehicle, a 2017 Volvo XC90 driving in autonomous mode with a safety driver behind the wheel, struck and killed a pedestrian in Tempe, Ariz. Beyond those facts, though, there is a lot of speculation about how it happened and who or what was at fault. The full investigation will take quite a bit of time, but we already know a lot more than we did yesterday, and we can focus a little more clearly on the facts and the real issues they highlight.

About the Accident

We now know that the pedestrian, 49-year-old Elaine Herzberg, was pushing a bicycle loaded with packages. She was stepping from a center median into the traffic lanes, fairly far from a crosswalk, at around 10pm. Somewhat strangely, there’s an inviting brick pathway on the median where she crossed, but it’s paired with a sign warning pedestrians not to use it.

The car was in autonomous mode, traveling 38 mph in a 35 mph zone. According to police, it appears that neither the car’s safety systems nor the safety driver made an attempt to brake; the driver was quoted as saying the collision was “like a flash,” with their “first alert to the collision [being] the sound of the collision.”

Credit: ABC 15

As always, there are reasons to be skeptical about the claims of anyone involved in an accident. But Tempe Police Chief Sylvia Moir tentatively backed up the driver’s view of events after viewing video captured by the car’s front-facing camera. She told the San Francisco Chronicle, “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway.”

Moir went on to speculate that “…preliminarily it appears that the Uber would likely not be at fault in this accident…” However, she hedged with the additional comment, “I won’t rule out the potential to file charges against the (back-up driver) in the Uber vehicle.” The next day, the Tempe Police Department walked those statements back somewhat, clarifying that it is not the department’s role to determine fault in vehicular accidents. Unfortunately, that leaves things as clear as mud. So all we can do on that score is await the public release of video from the car’s cameras and the results of the NTSB investigation.

How Effective Are Safety Drivers?

The test car also has a second camera that records the driver. One of the most important lessons about the effectiveness of safety drivers may come from watching the time-synced video from the front-facing camera and the driver-facing camera to see how the event unfolded from the driver’s perspective and how they responded. However that turns out, I’m sure it will raise additional questions about safety driver training and alertness after long periods of inactivity. In terms of ruling out possible causes, the Tempe police have said that the driver showed no signs of impairment.

While it might have nothing to do with the accident, it won’t help Uber’s cause that the 44-year-old safety driver, Rafaela Vasquez, has a prior felony conviction. Uber has been in trouble before for illegally employing felons as drivers in Colorado, although it isn’t clear whether any regulations were violated in this case.

It’s Not the Trolley Problem

It’s really popular to bring up the hypothetical “trolley problem” when discussing self-driving vehicles. In short, it asks whether a driver, human or computer, would, or should, plow into a crowd or deliberately swerve away at the cost of killing someone else. To paraphrase a recent ad, “That Is Not How It Works.” Sure, eventually we’ll have AI systems that reason at that level, but not soon. Currently, the systems that drive these cars, land our planes, or manage our trains operate at a much lower level than that.

Whether you are a human or a self-driving car, there isn’t any way to ‘win’ the Trolley Problem. You either kill a crowd or deliberately change course to kill a single person.

Today’s systems are designed to react to their environment and avoid hitting things. Best case, they “know” enough to hit a trash can instead of a pedestrian, but they are not counting passengers in each vehicle or weighing deep ethical considerations. In this case, the Volvo was equipped with one of the most modern, and most touted, safety systems, including automatic emergency braking. It’s very important that we understand why those emergency systems apparently failed to brake in this case. Whether it was an issue with the sensors, the logic, or the response time needed to activate the brakes, there’s clearly room for improvement in the system. Hopefully the relevant data will be made public for the benefit of everyone in the industry.
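To make that reactive character concrete, here is a minimal, purely illustrative sketch of the kind of check an automatic emergency braking system runs on every sensor cycle. It is not Uber’s or Volvo’s actual logic; the function name and the 1.5-second threshold are invented for the example.

```python
# Illustrative only: a toy time-to-collision (TTC) check of the sort a
# reactive emergency-braking system might evaluate on every sensor cycle.
# It reacts to an obstacle in the vehicle's path; it does not weigh
# ethical trade-offs or count occupants.

BRAKE_TTC_SECONDS = 1.5  # hypothetical threshold for triggering full braking


def should_emergency_brake(obstacle_distance_m: float,
                           closing_speed_mps: float) -> bool:
    """Return True if the obstacle would be reached within the braking threshold."""
    if closing_speed_mps <= 0:  # not closing on the obstacle
        return False
    time_to_collision = obstacle_distance_m / closing_speed_mps
    return time_to_collision < BRAKE_TTC_SECONDS


# Example: an obstacle detected 20 meters ahead while closing at 17 m/s (about 38 mph)
print(should_emergency_brake(20.0, 17.0))  # True; TTC is roughly 1.2 seconds
```

The point of the sketch is how little judgment is involved: detect, estimate, brake. The open question in this accident is which of those steps broke down.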

What Kind of Driver Do We Want Our AIs To Be?

Faced with a similar situation, Waymo’s cars will often slow to a stop and wait for a bicyclist to decide whether to cross in front of them, even when the cyclist has their arms folded across their chest. My colleague Bill Howard reports similar behavior from the self-driving cars he has demoed. This can seem a bit silly by human-driver standards, and it is annoying to the cars lined up behind the stopped vehicle, but it is a good way to make sure nothing bad happens. In similar situations, human drivers certainly take more risks, and in many cases accidents result. We accept that as part of the risks of roads and cars.

But when it is a computer-controlled car, we expect it to be perfect. So it is either going to clog roadways by being super-cautious like the current Waymo cars, run the risk of being involved in accidents, or rely on a new approach to safe driving that hasn’t been figured out yet. Realistically, we need to decide as a society how much risk we are willing to bear. If it’s okay for computer-controlled cars to simply be safer on average than human-driven vehicles, we’re closing in on success in many conditions. After all, computers don’t get tired, don’t drink or look at their cellphones, and typically have the ability to see in the dark. However, the street-based self-driving demos at CES had to be canceled on the day there was heavy rain, so there are still plenty of limitations.

If we expect self-driving cars to be perfect, we’re in for a long wait. At a minimum, they will need to be able to see and interpret the body language and facial expressions of pedestrians, cyclists, and other motorists. Friends in the autonomous vehicle industry postulate that fully autonomous vehicles will need to be 10 times safer than human-driven cars to be successful and broadly allowed on public roads.
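To put a rough number on that target, here is a back-of-the-envelope calculation. It assumes the figure of about 1 fatality per 100 million miles driven for human drivers cited later in this article; the numbers are approximations used purely for illustration.

```python
# Back-of-the-envelope arithmetic for the "10 times safer" target.
# The human baseline is a rough approximation, used only for illustration.

HUMAN_FATALITY_RATE = 1 / 100_000_000  # ~1 fatality per 100 million miles driven
SAFETY_MULTIPLIER = 10                 # the "10 times safer" rule of thumb

target_rate = HUMAN_FATALITY_RATE / SAFETY_MULTIPLIER
miles_per_fatality = 1 / target_rate

print(f"Target: roughly 1 fatality per {miles_per_fatality:,.0f} autonomous miles")
# Prints: Target: roughly 1 fatality per 1,000,000,000 autonomous miles
```

In other words, the target works out to about one fatality per billion miles, which also hints at how many test miles it would take to demonstrate that level of safety convincingly.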

Towards More Intelligent Regulation

There are already some rules about what a company needs to do to field self-driving vehicles with a safety driver, and in some cases, like elsewhere in Arizona, a set of rules for vehicles with no human driver at all. But these rules were made up without much data, at least on the part of regulators. As we get more experience with what can go wrong, we’ll hopefully get better, and better-targeted, regulations specifying the requirements for road testing vehicles with drivers, and ultimately without drivers. For example, as part of luring autonomous vehicle research and testing to the state, Arizona has a particularly vendor-friendly set of rules that doesn’t require public disclosure of disengagements (times when the human has to take over the vehicle). By contrast, California requires an annual report on them.
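As a sketch of the kind of metric those disclosure rules enable, the basic figure observers compute from California’s reports is miles driven per disengagement. The companies and numbers below are invented for illustration and are not drawn from any real filing.

```python
# Hypothetical example: computing a disengagement rate from test logs.
# The company names and figures are made up for illustration only.

test_logs = [
    {"company": "ExampleCo A", "autonomous_miles": 100_000, "disengagements": 50},
    {"company": "ExampleCo B", "autonomous_miles": 20_000, "disengagements": 400},
]

for log in test_logs:
    rate = log["autonomous_miles"] / log["disengagements"]
    print(f"{log['company']}: one disengagement every {rate:,.0f} autonomous miles")
```

Without mandatory disclosure, outsiders can’t compute even that crude number, which is the practical difference between the Arizona and California approaches.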

However the investigation turns out, this accident is placing additional scrutiny on the way self-driving vehicles are being developed and tested.

Driving Is About More Than Engineering

The more I study the complexities of building an autonomous vehicle, the more amazed I am that we don’t all die on the roads already. Between poor eyesight, slow reflexes, inhospitable conditions, and plenty of distractions, on paper it seems like human-driven cars should run into each other quite a lot. It’s not that we don’t have plenty of accidents, but overall there is about 1 fatality per 100 million miles driven. The fact that we don’t crash more often is a tribute to facets of human intelligence that we don’t understand very well, and that haven’t yet been programmed into autonomous vehicles. Just as importantly, many of our roads would be unusable if every driver adhered to the letter of every law. It is going to take more than just better machine learning algorithms and sensors before we have an effective system that allows self-driving and human-driven cars to share the roads with each other, as well as with pedestrians and cyclists.