Reading review
Autonomous driving and the law: Who's responsible when there's a machine at the wheel?
Julie Halpert, Automotive News | August 24, 2015 - 12:01 am EST
Drivers cause accidents, accidents cause injuries and injuries cause lawsuits.
But if the advent of autonomous vehicles means cars increasingly control themselves, who is liable when something goes bump?
That's a question for judges and lawyers, and a topic of widespread academic and policy interest as automakers ramp up plans for self-driving cars. The federal government is sponsoring legal research in the academic realm, and two symposiums in recent years -- one in 2012 in California and another last fall in Minnesota -- have focused on the legal implications of automated driving.
Bryant Walker Smith, a professor at the University of South Carolina, is an internationally recognized expert on the law of self-driving vehicles and taught the first-ever course on the topic. He's part of a new community of lawyers working on how to determine who's responsible for accidents involving self-driving vehicles.
"Managing this transition will be very complicated and will, I think, be a source of litigation," Smith said.
In a telephone interview with Automotive News correspondent Julie Halpert, he shared his perspective on a few scenarios that could unfold as autonomous vehicles become more mainstream:
Scenario 1: An autonomous car brakes quickly to avoid a collision with a bicyclist. The passenger isn't wearing his seat belt. He's thrown from the seat into the windshield and is injured. He sues the automaker for damages.
There is not a nationwide set of rules for when you can successfully sue someone for an injury they've caused you, and states vary considerably in the rules they apply.
Many states take the position that manufacturers cannot introduce evidence of an injured person's failure to wear their seat belt as a reason for the injuries. Some [automotive] developers are trying to limit liability and also make systems safer by not allowing the car to start if people are not belted. The unbelted user who is thrown from the vehicle will, in most states, need to argue what the manufacturer should have done differently to protect them. And then it is an argument in court over whether that would have been a reasonable design alternative.
The manufacturer could argue that they're interested in protecting user autonomy and that they're under no obligation to impose an ignition interlock. They could also argue that there are situations where a user may need to operate a car with the seat belt disengaged.
The plaintiff could also argue that the vehicle should have known that the occupant wasn't belted and therefore should have decelerated more slowly, to mitigate the extent of injury. If the vehicle slowed to avoid the bicyclist, then the driver would be in the difficult position of arguing that the vehicle should have [prioritized] the driver's safety over the vulnerable road user's. The manufacturer could argue that seriously injuring or killing a bicyclist to avoid having the occupant go through the windshield is not an appropriate trade-off.
Scenario 2: A driver perceives an imminent collision, and takes control of the wheel. The car crashes anyway, injuring the driver. The driver sues the carmaker for damages.
A question to ask is, if the human had not taken the steering wheel, would the automated system have crashed? If so, it really doesn't matter that the human intervened, because the crash was inevitable. Then you ask what caused the crash: Was it a failure of the automated system or one wholly beyond the capability of the automated system or a human to manage?
If the automated system had been able to prevent the crash, but the human driver by virtue of re-engaging ended up crashing the vehicle, then it would be difficult to argue that the manufacturer was liable. You'll see arguments about human fallibility vs. human autonomy, with manufacturers saying it's important to allow the human to always exercise his authority.
The argument from the driver will be that the automated system should have intervened earlier or should have intervened more effectively. The manufacturer will say that automated emergency intervention systems are supporting systems: they are not perfect, and because their goal is to give maximum flexibility to the human driver, intervening earlier would be counterproductive.
Scenario 3: An automated vehicle accelerates through a construction zone, injuring a construction worker. That worker sues the automaker.
Automated vehicles in the future are going to be held to the higher of two standards. The first is whether the vehicle performed as well as we would expect of a reasonable human driver. The second is whether, even if the vehicle performed far better than we'd expect of a reasonable human driver, it [could] have performed even more safely. If it falls short of either standard, the manufacturer will be liable for injuries.
We would expect the driver to be very careful around a construction zone. Because the vehicle didn't perform as we would expect a reasonable human [to perform], the company might be liable.
For any viable case, there needs to be an injury, and the person they sue needs to have at least partly caused that injury through some kind of action that was negligent or that involved a defect.
The plaintiff would have the burden of demonstrating the defect. They could do that by saying there was a specific error in the company's code that should have been changed or there was a specific sensor that should have been added. That would be pointing to a better design that the manufacturer should have incorporated.
The plaintiff might also argue that even if he can't point to that reasonable alternative design, and even if the vehicle was as safe as an automated vehicle could be expected to be in this situation, nonetheless it did not perform as safely as a reasonable human would, and on that basis alone, the manufacturer should be liable.
There's this added burden on the plaintiff to show that the danger was foreseeable and that there was a reasonable alternative design that could have prevented the injury without reducing the safety of the product.
The manufacturer could argue that there was no technical way to have prevented this, that the data and sensors were fine, but that this was a wholly unforeseeable fluke that just happened -- a bizarre confluence of the sun, the cone, the weather and the striping on the road. It was not the result of a deficient product.