On March 18, 2018, a vehicle operating in a “self-driving mode” and owned by Uber struck and killed a woman in Tempe, Arizona, as she was crossing the street. This was the first documented pedestrian fatality caused by a self-driving vehicle. The vehicle in question was manufactured by Volvo but was purchased and retrofitted by Uber with a self-driving system. The vehicle did not “see” the woman crossing the street, did not slow down, and struck her.

While more information is still needed, the incident raises an obvious question: whose fault is it?

1. Volvo (manufacturer)?

Perhaps you would argue that Volvo should bear some liability for this accident. The question that needs to be answered is whether Volvo could have foreseen that Uber intended to retrofit its vehicle with a self-driving device, and whether Volvo had any control over the design, programming, or implementation of that device. Answering it would require an investigation into the ongoing relationship between Volvo and Uber.

This argument may prove to be a dangerous one. It would be shocking to hold a car manufacturer responsible for an ordinary accident caused by a negligent driver (absent a defective product). But should an auto manufacturer be held to a heightened standard of care when it participates in the research or development of a new product? Holding a manufacturer liable for the actions of a third party may seem outrageous, but shouldn’t we expect a manufacturer to ensure that its product is being used for its intended purpose and, if not, to step in?

2. Uber (owner / modifier)?

In this case, Uber seems like the obvious target. Or is it? Uber purchased a vehicle that was designed to be driven by a human driver (with some assistive electronics like lane assist and adaptive cruise control). Uber then took the active step of altering the vehicle and fitting it with autonomous driving capabilities. On this view, Uber created a new product and is liable for its malfunction.

But then the question arises: what level of autonomy did Uber program the vehicle to have? The US Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) has adopted SAE International’s six-level taxonomy (Level 0 through Level 5) to describe a vehicle’s degree of automation. Perhaps the vehicle carried a Level 2 system, which requires the driver not only to be able to assume control of and responsibility for the vehicle but also to remain fully cognisant of the surrounding circumstances; or a Level 3 system, which only requires the driver to be ready to assume control in an emergency; or perhaps a Level 4 system, in which the vehicle assumes complete control and does not require a driver’s input in most conditions.

Clearly, the level of liability fluctuates with the level of control that a manufacturer (or third-party producer) gives to the vehicle. These levels need to be clearly communicated to users to ensure that someone, or something, is responsible for controlling the vehicle at all times, as the sketch below illustrates.
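For readers who want the taxonomy at a glance, here is a minimal sketch in Python (purely illustrative; the level names and duty descriptions paraphrase SAE J3016 and are not legal definitions of liability) mapping each automation level to who performs the driving task and who serves as the fallback when the system reaches its limits:

```python
# Illustrative only: a rough mapping of the SAE J3016 automation levels
# (the taxonomy NHTSA has adopted) to who drives and who must take over
# when the system cannot cope. Paraphrased descriptions, not legal text.
from dataclasses import dataclass


@dataclass(frozen=True)
class AutomationLevel:
    level: int
    name: str
    who_drives: str  # who performs the dynamic driving task
    fallback: str    # who must respond when the system cannot cope


SAE_LEVELS = [
    AutomationLevel(0, "No Automation", "human driver", "human driver"),
    AutomationLevel(1, "Driver Assistance", "human driver, with system assistance", "human driver"),
    AutomationLevel(2, "Partial Automation", "system, with the driver monitoring at all times", "human driver"),
    AutomationLevel(3, "Conditional Automation", "system", "human driver, on the system's request"),
    AutomationLevel(4, "High Automation", "system, within its design domain", "system"),
    AutomationLevel(5, "Full Automation", "system, in all conditions", "system"),
]


def fallback_party(level: int) -> str:
    """Return who is expected to respond when the automation reaches its limits."""
    return next(l.fallback for l in SAE_LEVELS if l.level == level)


print(fallback_party(2))  # -> "human driver"
print(fallback_party(4))  # -> "system"
```

On this mapping, the fallback duty, and arguably the bulk of the liability exposure, shifts from the human to the system between Levels 3 and 4, which is precisely why the level at which Uber’s system operated matters.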

3. The Driver (operator)?

The classic target: humans. Manufacturers are typically not named in lawsuits stemming from a motor vehicle accident (save for a product liability claim); rather, the individual driver is typically held liable for an accident they caused. But what about the promise of complete road safety made by autonomous vehicle manufacturers? What can an average driver reasonably expect of a vehicle that is claimed to be autonomous? Can a contract between a manufacturer and a driver predetermine liability? Why would consumers trust the technology and voluntarily assume liability for an accident the car was designed to prevent?

In order to determine the liability of the driver of an autonomous vehicle, courts will need to consider the above questions. Courts will have to determine what a “reasonable person” would expect from their autonomous vehicle, what agreement was entered into between the manufacturer and the driver, and, of course, the circumstances of the loss.

4. The Victim?

Contributory negligence will remain a key component of most lawsuits in which a victim contributed to their own injury. However, courts will be tasked with determining whether a potential victim was owed a higher duty, if one was owed at all, by the manufacturer of a self-driving vehicle; after all, manufacturers did promise safer roads. This is an unprecedented consideration, but I suppose it’s fitting, given the topic.

Implications

Perhaps one of the biggest considerations is the effect that autonomous vehicles will have on the insurance industry. Underwriters will have to take a more active role in determining the capabilities of the vehicle being insured and, more importantly, whether those capabilities can in fact prevent potentially fatal accidents.

Adjusters will need to be cognisant of potential misrepresentations made by their clients in the course of obtaining an insurance policy, as well as of clients’ ongoing duty to disclose material changes (e.g., the addition of a self-driving apparatus to their car).

Regardless of how you feel about Skynet, the tragedy in Arizona will soon shed light on whether our society and legal system are prepared to accept and handle significant strides in technology. We will keep you updated as the story progresses.

Author

  • Stas Bodrov

    Once the target of an unsuccessful phishing scam, Stas is a key part of SBA’s cyber liability and privacy group providing services ranging from assessments and prevention to crisis response.