


Why Can't Google Cars Avoid Rear-End Accidents?


Google’s self-driving cars are making headlines as innocent victims in rear-end collisions. But this past week, Google revealed that one of these collisions had injured its employees inside the vehicle, causing minor whiplash. It could have been much worse. Given how often rear-end crashes occur, why can’t robot cars simply drive out of the way when they sense an incoming collision?

Well, they can, but there are unusual ethical and legal issues to think through first.

The technology exists for a robot car to detect an impending collision—whether with another vehicle, a pedestrian, a bicyclist, or another object—and to swerve out of the way if it can. So far, this feature is used for front collisions. In theory, it could also be used for rear-end collisions. But consider this problem:

A crash scenario

Your robotic car is stopped at an intersection, waiting patiently for children to cross in front of you. It detects a pickup truck coming up from behind, about to cause a rear-end collision. The crash would cause only minor damage to the car and yourself, probably a bent fender and whiplash, but certainly not death. To avoid this harm, your car is programmed to dash out of the way if there's a safe path to take. In this case, it can easily turn right at the intersection and avoid the accident. It follows its programming, but in doing so it clears a path for the truck to continue through the intersection, killing a couple of children and seriously injuring others.

Was this the correct way to program an autonomous car? In most cases of an incoming rear-end collision, probably yes. But in this particular case, the design decision meant saving you from minor injury at the expense of the serious injury and death of several children, and that hardly seems like the right choice.

Arguably, you (or the car) may be responsible for their deaths: you (or the car) killed the children by removing an obstruction that had been shielding them from harm. It's the same kind of responsibility as if you had pulled away a shield someone was holding up against gunfire: you cause that person's death by taking away the shield, even if the shield is yours. And killing innocent people has legal and moral ramifications.

It might be that in the same situation today, in a human-driven car, you would also react to save yourself from injury, just as the robot car did. The result might not change if a human made the on-the-spot decision.

But it is one thing to make this call in the panic of the moment, and another, less forgivable thing for a programmer, far removed from the scene and a year or more in advance, to create a cost function or algorithm that results in these unnecessary deaths. Either the programmer made that trade-off deliberately, aware of the possibility, or she made it unintentionally, unaware that it was a possibility. If the former, this could be construed as premeditated homicide; if the latter, gross negligence.
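To make the programmer's dilemma concrete, here is a deliberately simplified sketch of how such a trade-off might look in code. Everything in it, the maneuver names, the harm scores, the weighting, is invented for illustration; it is not how Google's or anyone else's software actually works. The point is only that a cost function which counts harm to the car's own occupants and nothing else will choose the swerve every time.

    # A hypothetical sketch, not real self-driving software. All names,
    # maneuvers, and harm estimates are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        occupant_harm: float     # estimated injury to people inside the car (0 = none, 10 = fatal)
        third_party_harm: float  # estimated injury to people outside the car

    # Candidate responses to a detected rear-end threat, roughly matching the scenario above.
    options = [
        Maneuver("brace_in_place", occupant_harm=2.0, third_party_harm=0.0),  # bent fender, whiplash
        Maneuver("swerve_right",   occupant_harm=0.0, third_party_harm=9.0),  # clears the truck's path into the crosswalk
    ]

    def naive_cost(m: Maneuver) -> float:
        # Counts only harm to the car's own occupants.
        return m.occupant_harm

    def broader_cost(m: Maneuver, weight: float = 1.0) -> float:
        # Also counts the harm the maneuver shifts onto others.
        return m.occupant_harm + weight * m.third_party_harm

    print(min(options, key=naive_cost).name)    # swerve_right: saves you, exposes the children
    print(min(options, key=broader_cost).name)  # brace_in_place

Even the broader version only relocates the question: someone still has to decide how heavily a stranger's injury should weigh against the owner's, and that is an ethical judgment, not merely an engineering one.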

Either way is very bad for the programmer and an inherent risk of the business. As the industry attempts to replicate human decision-making in driving, it takes on the responsibilities and liabilities of the human driver: new obligations it hasn’t had before.

A new kind of dilemma 

But wait: this same legal and ethical risk exists with avoiding front collisions. So what’s different about rear-end collisions? Why aren’t we seeing avoidance systems for that kind of crash, especially since rear-end collisions are common and can be fatal, too?

One difference is that, in rear-end collisions, the blame is usually on the rear car, which is presumed to be tailgating, driving too closely for road conditions, or not attentive enough to stop in time. So there’s no legal responsibility for the robot car in front to avoid a rear-end collision, especially if it were stopped and doing nothing wrong. In fact, it’s usually legally safer to not intervene in an accident in progress that you did not cause, in case you make it worse. But if a robot car were already moving and detected an object in its path, the car would have a much greater duty to avoid harm, since it had actively created this circumstance.

This problem is different from others in robot car ethics, which are generally versions of the trolley problem made famous by philosophers Philippa Foot and Judith Jarvis Thomson. Usually, the discussion is about confronting a choice of two evils, between striking one thing versus another thing. But in this case, the robot car makes a perfectly legal choice: to get out of the way of an accident, directly harming no one. Nonetheless, what’s legal here seems to be unethical, suggesting that programming a car to act lawfully might not be enough.

Here, we’re stress-testing a moral principle we all take for granted: that if you can easily avoid harm to yourself, then you should do so. In fact, it may be morally required that you save yourself when possible, for your own sake and for the sake of those who care about or depend on you.

But, it turns out there are ethical nuances in even avoiding a crash (or “ducking harm” as it’s called in philosophy). Even if pedestrians aren’t involved, by getting out of the way of a crash, you’ve shifted the bad luck you were about to encounter to the car in front of you—something like the victims in The Ring who must copy a cursed videotape for others to watch (and potentially die in horrible ways), just to save themselves.

What now? 

These and other tricky problems don't mean we should stall the progress of autonomous cars, only that we need to anticipate and prepare for the potholes ahead. Sometimes, a robot car may be faced with a no-win scenario, putting the programmer in a difficult but all too real position.

To lessen this risk, the industry may do well to set expectations not only with users but also with broader society, by educating the public about the capabilities and limits of its robot cars. This responds to growing calls for algorithmic transparency, as computer programs we don't understand increasingly control our wired lives, from financial trading and air traffic to social newsfeeds and, soon, our roadways.

Being transparent about self-driving capabilities can help avoid alarming headlines and courtroom outrage if and when robot cars eventually cause a major accident. “Expectations are premeditated resentments,” an old saying goes. The industry can better prepare for these legal and ethical trials if it proactively explains the design and implications of its algorithms, setting informed, realistic expectations.

Accidents will still happen, and no technology is perfect. But being properly diligent, thoughtful about both law and ethics, goes a long way toward forgiveness. Conversations with developers and outside experts, like this one at Stanford last month, are therefore important for the autonomous driving industry, since self-driving cars could be a major public good. But if we don’t even recognize there’s a problem, we can’t design a solution for it.

~~~

Acknowledgements: This work is supported by the US National Science Foundation, Daimler and Benz Foundation, and California Polytechnic State University, San Luis Obispo. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the aforementioned organizations.