A Waymo self-driving car killed a dog in ‘unavoidable’ accident
A Waymo robotaxi operating in autonomous mode struck and killed a small dog last month in San Francisco, according to an incident report filed with the California Department of Motor Vehicles. The incident appears to have been unavoidable, based on information provided in the report.
Any collision involving an autonomous vehicle — even one in which the company is not at fault — can cause a backlash, particularly in a city like San Francisco, where there is already tension between city officials, AV tech companies and the public. If technological capability and a favorable regulatory environment are two legs of a solid AV commercialization stool, public perception is the crucial third. And a self-driving car killing a sweet pooch has the potential to kick out that third leg.
It also comes at a critical time for the industry and, specifically, for Waymo, which is expanding its robotaxi network in Phoenix and gearing up to charge fares for fully autonomous (meaning no human behind the wheel) rides in San Francisco.
According to the report, one of Waymo’s self-driving Jaguar I-Pace cars was traveling on Toland Street, a low-speed street near Waymo’s depot, when the dog ran into the roadway. The vehicle was operating in autonomous mode, and a human safety operator was in the driver’s seat at the time of the accident.
The human operator didn’t see the dog, but the vehicle’s autonomous system did. However, a number of factors, including the speed and trajectory of the dog’s path, made the collision unavoidable, according to Waymo.
In response to our questions about the incident, Waymo sent TechCrunch the following statement:
On May 21 in San Francisco, a small dog ran in front of one of our vehicles with an autonomous specialist present in the driver’s seat, and, unfortunately, contact was made. The investigation is ongoing, however the initial review confirmed that the system correctly identified the dog which ran out from behind a parked vehicle but was not able to avoid contact. We send our sincere condolences to the dog’s owner. The trust and safety of the communities we are in is the most important thing to us and we’re continuing to look into this on our end.
Neither the safety operator nor the autonomous system braked to avoid collision, according to Waymo. In both cases, that’s because of the “unusual path” the dog took at “a high rate of speed directly towards the side of the vehicle,” said a Waymo spokesperson.
One of the ways Waymo evaluates its autonomous driver’s collision-avoidance performance is by comparing it against a reference model of a non-impaired human driver with eyes always on the conflict (NIEON). A Waymo spokesperson told TechCrunch that the company reconstructed last month’s event in simulation against the NIEON model, and that the analysis showed the collision was unavoidable even for that reference driver.
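To give a sense of the kind of counterfactual question such a comparison asks, here is a minimal sketch in Python. The reaction-time, braking and distance figures are made-up assumptions for illustration only; Waymo’s actual NIEON model and simulation tooling are not public, and this is not them.

```python
# Illustrative toy only: could an attentive reference driver have stopped in time?
# All parameter values below are assumptions, not measurements or Waymo's NIEON model.

def reference_driver_can_avoid(
    distance_to_conflict_m: float,   # distance to where the paths cross at first detection
    vehicle_speed_mps: float,        # vehicle speed at first detection
    reaction_time_s: float = 1.0,    # assumed perception-reaction time of the reference driver
    max_decel_mps2: float = 7.0,     # assumed hard-braking deceleration
) -> bool:
    """Return True if the reference driver could stop before the conflict point."""
    # Distance covered while the driver perceives and reacts (no braking yet).
    reaction_distance = vehicle_speed_mps * reaction_time_s
    # Distance covered while braking from the current speed to a stop: v^2 / (2a).
    braking_distance = vehicle_speed_mps ** 2 / (2 * max_decel_mps2)
    return reaction_distance + braking_distance <= distance_to_conflict_m


# Example: an animal emerging from behind a parked car ~8 m ahead of a vehicle
# moving at roughly 25 km/h (~6.9 m/s) leaves no room to stop, even for an
# attentive driver. This prints False.
print(reference_driver_can_avoid(distance_to_conflict_m=8.0, vehicle_speed_mps=6.9))
```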
Sagar Behere, VP of safety at AV verification and validation startup Foretellix, told TechCrunch that timing is a key factor in an AV’s ability to avoid collision. (Behere spoke to TechCrunch about AV technology generally, and not about Waymo specifically.)
“If you saw the object, when did you see it? Did you see it in time to be able to act on it and make a good evasive maneuver?” said Behere. “Or maybe you saw it and predicted it would move in a way that required you to take no action? Or maybe you were about to take action, but then the object changed course.”
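Behere’s point about timing boils down to a simple budget: the time between first detection and predicted impact has to exceed the time needed to sense, decide and complete a maneuver. The sketch below is purely illustrative, with assumed latency and maneuver-time values that are not tied to Waymo or any real AV stack.

```python
# Toy illustration of the timing question described above; the latency and
# maneuver-time figures are invented assumptions, not measurements of any AV system.

def has_time_to_react(
    time_to_impact_s: float,             # time from first detection to predicted impact
    perception_latency_s: float = 0.1,   # assumed sensing/classification delay
    planning_latency_s: float = 0.1,     # assumed prediction/planning delay
    maneuver_time_s: float = 1.0,        # assumed time to complete braking or steering
) -> bool:
    """Return True only if detection leaves enough time to finish an evasive maneuver."""
    return time_to_impact_s > perception_latency_s + planning_latency_s + maneuver_time_s


# An object that appears only ~0.5 seconds before impact leaves no usable window,
# no matter how quickly it is detected and classified. This prints False.
print(has_time_to_react(time_to_impact_s=0.5))
```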
Despite Waymo’s potential for plausible deniability here, the company and the industry at large are still at risk of a downturn in public perception, which could make AV expansion plans more difficult.
Recent studies show that public perception of autonomous vehicles is improving, albeit slowly and mainly in regard to lower levels of automated driving, such as advanced driver assistance systems. People trust the advanced driver assistance systems in today’s new vehicles (systems that still require a human ready to take control) more than fully autonomous vehicles when it comes to crash prevention, according to a study from the AAA Foundation for Traffic Safety that collected responses from 2018 to 2020.
Another study that collected nearly 6,000 responses between February and June 2022 found that trust in AVs decreases after a crash, while levels of concern increase. However, those with prior knowledge of AVs are more optimistic toward the technology, even after a crash, because they generally accept that AVs will not always make the right decisions at early stages of adoption.
Public perception aside, Waymo could face investigations from regulatory bodies like the National Highway Traffic Safety Administration (NHTSA). NHTSA requires manufacturers and operators of highly automated vehicles to submit incident reports for crashes if the automated driving system was in use at any time within 30 seconds of the crash and the crash resulted in property damage or injury. The agency told TechCrunch it had reached out to Waymo for more information, but has no open investigations into the company at this time.
In 2018, when an autonomous vehicle from Uber’s now-shuttered AV unit hit and killed a pedestrian, the National Transportation Safety Board (NTSB) launched an investigation. Usually, the NTSB launches a highway investigation when there has been a significant crash that highlights a potential national safety issue. A spokesperson from the agency told TechCrunch she doesn’t believe the NTSB has any current investigations involving Waymo.
This article has been updated with NHTSA’s statement.