Podcast: Let’s Talk About AV Liability 

William Widen, a professor of law, and Phil Koopman, an engineering professor, are proposing a new legal framework that rescues victims of automated-vehicle (AV) accidents from the Black Hole of Liability.
William Widen (left) and Phil Koopman


By Junko Yoshida

Guests:

  • William Widen, Professor of Law at the University of Miami
  • Phil Koopman, Associate Professor, Department of Electrical and Computer Engineering at Carnegie Mellon University

Let’s cut to the chase. Who’s accountable when a highly automated vehicle gets into a crash?

To such a seemingly simple question, there should be a straightforward answer. But today, there isn’t. In the absence of statutes that address this dilemma, the definition of who counts as the driver of a self-driving car varies widely from state to state: Is it the computer, the owner, or the person who pushed the “Go” button?

Or is it, God help us, Elon Musk? 

None of these choices, frankly speaking, makes sense. The computer has no assets to pay for the damage. And what could be accomplished by throwing the computer into jail? The owner of a self-driving car is not its manufacturer, which means the owner has no ability to control the car’s misbehavior, let alone take corrective action after a crash.

State laws today offer no clarity about who, other than the machine itself, is the responsible party when a self-driving car commits what amounts to negligent homicide. The question remains open.

In this week’s podcast, Bill Widen, a law professor, and Phil Koopman, a professor of engineering who has dealt with product defect lawsuits, weigh in on the topic.

Below, we provide some background on how Widen and Koopman came to develop a new legal framework for AV liability. You can read it before or after listening to the podcast.

Koopman explains: “Assume you’re the victim or a member of the family who didn’t survive, and you want to get justice. Your first step is to argue that the current law makes no sense. You bring it all the way to the state supreme court to get the law ruled unconstitutional … And only then can you get to compensation.” This arduous process puts a huge burden on plaintiffs. Worse, it deters victims from obtaining appropriate compensation, Koopman added.

To untangle what seems like a complex web of liability, Widen and Koopman are proposing a simpler, common-sense approach to the problem of AV accountability.

They suggest: When robotaxi companies create a computer driver that replaces a human, why not just treat the human and computer driver the same before the law? 

They explained:

For example, the legal system finds a driver at fault when he or she runs a red light. A computer that ignores the light should face identical legal consequences, without requiring the victim to prove a technical defect. The human might claim that she didn’t notice the red light because she was talking to a passenger. But her excuse for the traffic violation does not mitigate the fact that she’s at fault.

The same logic applies to a computer driver. Whatever techno-mystery caused it to break the law should not absolve the machine of fault.

Given robotaxi companies’ claims that their self-driving cars are dramatically safer than human drivers, Widen noted, “then, on the off chance that the robotaxi causes an accident, the fair result is that you pay for it. Full stop.”

In a recent technical paper, Widen and Koopman propose holding computer drivers accountable by requiring them to perform as the law expects of a reasonable human driver.

They are advocating new state laws that create a legal fiction, the “Computer Driver,” which would make it clear that “a computer driver has a duty of care to the other road users.”

With this clarity in state law, a plaintiff could bring an ordinary negligence claim against the manufacturer of a self-driving car.

In another technical article, Widen and Koopman discuss “liability attribution rules when humans and computers share driving responsibilities.”

Let’s say the Computer Driver, operating within its Operational Design Domain (ODD), asks the human driver to take over steering. The human driver will reasonably need some amount of time to reassume control. Rather than wasting a court’s time arguing over how quick the human’s reflexes should be (three seconds? five? eight? ten?), Widen and Koopman suggest a floor of ten seconds.

Widen said, “Let’s just have a blackout period where the human doesn’t have contributory negligence for the first 10 seconds of the handoff. And after that, we’ll determine who should have done what, based on the facts and circumstances exactly the way we do it in a tort case now.”

As Koopman noted, “That gets rid of cases where the automated driving system turns off one second before the crash and the manufacturer says, ‘Well, the human was in charge at the time of the crash.’”
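To make the proposed rule concrete, here is a minimal sketch of how the ten-second blackout could be expressed as a decision rule. It is purely illustrative: the names and structure are ours, not from Widen and Koopman’s paper, which states the rule in legal terms rather than code.

```python
from dataclasses import dataclass

# Illustrative sketch of the proposed handoff rule (hypothetical names).
HANDOFF_BLACKOUT_SECONDS = 10.0  # the suggested statutory floor

@dataclass
class Crash:
    handoff_requested_at: float  # seconds into the trip when takeover was requested
    crash_at: float              # seconds into the trip when the crash occurred

def human_fault_possible(crash: Crash) -> bool:
    """During the first 10 seconds after a handoff request, the human driver
    bears no contributory negligence; only after that window does ordinary,
    fact-based tort analysis consider assigning the human a share of fault."""
    elapsed = crash.crash_at - crash.handoff_requested_at
    return elapsed > HANDOFF_BLACKOUT_SECONDS

# The edge case Koopman describes: the system disengages one second before impact.
crash = Crash(handoff_requested_at=100.0, crash_at=101.0)
print(human_fault_possible(crash))  # False: fault stays with the computer driver
```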

Widen and Koopman offer similarly insightful comments as they answer the questions you always wanted to pose but were afraid to ask.

Listen to our podcast:


Junko Yoshida is the editor in chief of The Ojo-Yoshida Report. She can be reached at junko@ojoyoshidareport.com.


