
What Google Cars Can Learn From Killer Robots


Robot cars and military robots have more in common than you’d think.  Some accidents with self-driving cars will result in fatalities, and this may be troubling in ways that human-caused fatalities are not.  But is it really worse to be killed by a robot than by a drunk driver—or by a renegade soldier?

We can look at the “drone wars” for insight, to the benefit of the autonomous driving industry that includes Audi, Bosch, Daimler, Ford, GM, Google, Nissan, Tesla, Toyota, Volvo, and other heavyweights.

Accidental ethics

Accidents with self-driving cars will happen as a matter of physics and statistics.  Even with perfect software, many things can go wrong that will cause them to crash:

Sensors can be damaged, improperly maintained, or impaired by bad weather; tires can unexpectedly blow out; animals and pedestrians—hidden behind curves, grassy knolls, cars, and other solid objects that sensing technologies cannot see through—can dart out in front of you; passing and oncoming drivers can accidentally swerve into you; insurance-fraud criminals can deliberately crash into you; and you could be hemmed in between cars or against a cliff, leaving no escape path when a big rock falls in front of you or a distracted driver plows into you from behind.

When accidents happen, autonomous cars may have a range of options to reduce the harm—that is, “crash-optimization” programming.  Besides slamming on the brakes, they could also change directions to reduce the force of a collision, including a choice of what the car crashes into, such as avoiding a wall but striking another vehicle.  Some accidents are worse than others, and an intelligent self-driving car could choose the lesser evil.
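
To make this concrete, here is a minimal sketch of what "crash-optimization" could look like in code.  It is an illustration only, not how any manufacturer actually programs its cars: the language (Python), the maneuver list, the harm estimates, and the weights are all invented for the example.

# A hypothetical sketch of "crash-optimization": score each feasible maneuver
# by its estimated harm and pick the least-bad option.  All names and numbers
# here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_harm: float   # estimated harm to the car's own passengers (0 to 1)
    external_harm: float   # estimated harm to other road users (0 to 1)

def choose_maneuver(options, occupant_weight=1.0, external_weight=1.0):
    """Return the option with the lowest weighted total harm.

    The weights are where the value judgments live: a higher occupant_weight
    "jealously protects the owner," while a higher external_weight puts
    public safety first.
    """
    return min(
        options,
        key=lambda m: occupant_weight * m.occupant_harm
                      + external_weight * m.external_harm,
    )

options = [
    Maneuver("brake hard into fallen object", occupant_harm=0.8, external_harm=0.1),
    Maneuver("swerve left into small car",    occupant_harm=0.2, external_harm=0.6),
    Maneuver("swerve right into large SUV",   occupant_harm=0.4, external_harm=0.3),
]
print(choose_maneuver(options).name)   # prints "swerve right into large SUV"

Whichever estimates and weights the engineers pick, the program ends up encoding a preference for one collision over another, which is exactly the kind of value judgment discussed next.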

But some of these decisions will look like judgment calls:  Is it better to crash into a small car or a large truck, or into a motorcyclist with a helmet or one without a helmet, or into a little girl or an elderly grandmother?  Should the car jealously protect its owner, or should it be first concerned with public safety?

Where value judgments like these are being made—to hit x instead of y—we need to think about ethics.  But the autonomous driving industry hasn’t engaged ethics much, at least publicly, even though many companies are interested in the issues.  Here’s why this silence can be a very expensive mistake:

Lessons from the drone wars

Because drones—or military robots, such as General Atomics’ MQ-1 Predator and MQ-9 Reaper models of unmanned aerial vehicles (UAVs)—are genetically related to robot cars, many of their ethical issues may carry over.  Military funding is behind both: Stanford University’s “Stanley”, an autonomous Volkswagen Touareg, was the first vehicle to win the Defense Advanced Research Projects Agency (DARPA) Grand Challenge in 2005, self-navigating a 132 mi (212 km) course in the Mojave Desert.

An MQ-9 Reaper unmanned aerial vehicle. (Photo credit: Wikipedia)

A key lesson from the drone wars is about the defense industry’s failure to engage in the ethics debate.  In late 2009, the Association for Unmanned Vehicle Systems International (AUVSI), the industry’s leading advocacy group, presented a survey of the top 25 stakeholders in the robotics field.  An astonishing 60% of these industry leaders reportedly did not expect “any social, ethical, or moral problems” to arise from the use of drones.  Whether from disbelief or disregard of ethical problems, the US defense community had largely been silent about the ethics and legality of drone warfare, even though a positive case could possibly be made for it.

That vacuum was quickly filled, as most are.  Since then, academic papers, reports, and best-selling books have raised awareness of those problems.  In recent years, activist groups have principally framed that debate without much opposition.  Now, the US and other defense communities are suffering “blowback”, a strong negative public reaction against the technology, as anti-drone campaigners make progress with international policy communities to regulate or outright ban the weapons.

The current blowback to defense communities suggests that the autonomous-car industry can pay now or pay later, but it eventually will need to pay attention to ethics.  It tends to be better to deal with such issues proactively, rather than defensively and reactively—that is, too late.  Much of this blowback might have been avoided had the community acknowledged and addressed the issues at an earlier stage, or at least been prepared to thoughtfully respond to criticisms when they surfaced.

Are killing machines unethical?

Some questions about military robots can apply to autonomous cars.  They include:  Is it unethical for a machine to kill?  Can ethics be reduced to algorithms?  Should a machine ever refuse a human order?  Who is responsible for an accident or unlawful action?  What are the possible abuses, such as easier assassinations and mission creep?  Should a robot be capable of self-defense?  What are the risks of technology dependency, such as loss of skills?

As it relates to robot cars, I’ll focus on only one linchpin issue—the first question above—whether it's wrong to give a machine the ability to make life-and-death decisions.  For military systems, some believe that “the kill decision shouldn’t belong to a robot.”  But why exactly?

The Terminator. (Photo credit: Wikipedia)

One explanation is this (extending and generalizing actual arguments made in the debate):  Even if a robotic decision to target a particular person is legal and identical to that decision made by a human, the machine’s actions are divorced from meaning, since the machine is incapable of “understanding” in the usual sense; and a decision that leads to harm or death of a human being is a most serious one that morally demands careful thought and meaning.  What little humanity remains in modern warfare would be eroded by uncaring lethal autonomous machines—removing the last safeguard against an illegal or inhumane kill-decision, especially since these technologies are still imperfect, and perhaps always will be.

Of course, it’s true that human soldiers often get it wrong too.  But for potentially life-changing or life-ending judgments, it’s not ridiculous to want the possibility of mercy, understanding, compassion, etc.—something that only a human can provide, it seems—between you and a bad decision.  And in the event of a bad decision, at least we have much clearer accountability, whereas blame could be passed along any number of links in the supply chain for designing, building, deploying, and operating a military robot.

Even if all robotic kill-decisions are lawful, it still seems better to have the possibility of mercy and compassion, however irrational and incomprehensible those human features may be.  With a robot behind the trigger, it’s difficult to imagine how symbolic cease-fires (such as the Christmas truce in WWI), acts of chivalry (such as the Charlie Brown incident in WWII), and other such events in wartime would be possible.  Yes, these events may be uncommon, but they are profoundly meaningful in getting combatants to acknowledge the humanity of their enemies—helping to make war less terrible, or to facilitate an end to the war, or at least to form a stronger basis for lasting peace.

Or so the argument might go.

What do robot cars have to do with this?

For some unavoidable accidents, the crash-optimization decisions of a robot car resemble targeting decisions—not unlike the kill-decisions that military robots are imagined to make in some dystopian future.

Say that your robot car, boxed in highway traffic with cars on all sides, must decide whether to swerve left and hit a small Mini Cooper or to swerve right and hit a large Volvo SUV.  (Imagine that there’s no time to brake for the large object that has just fallen off the truck in front of you, and anyway that would result in a serious accident with the big-rig truck tailgating you.)  How should the car be programmed to react?

Either decision to swerve left or right could be defended.  Crashing into the Mini Cooper would be safer for you, though not for the people in the smaller vehicle.  On the other hand, crashing into the SUV wouldn’t be as bad for those occupants, given its larger mass and safety features typical of a Volvo, though it would be worse for you to collide with a bigger vehicle.  But no matter which path is chosen, the car’s programming steers it toward a particular target, preferring one collision over another.

In this sense, the autonomous car acts like a lethal military robot: it makes a decision that causes harm or death to fall on some human being.  And this opens up the autonomous car to the same kind of criticism above that’s currently directed only at military robots—that it’s inhumane to allow a machine to make a “kill decision.”

But wait, does it matter that the robot car (or its programmer) doesn’t intend to harm its target—which it merely foresees or anticipates—whereas the military robot does intend harm?  Maybe, or maybe not.  It’s not so much the intention to kill during wartime that’s problematic: we already allow that human soldiers may intend to kill in war, whether with their bare hands or with tools.  But the problem is more about the ethics of giving certain tools—autonomous machines—the last say in making the deeply profound decision to act in ways that foreseeably result in the death of a fellow human being, such as to shoot at or crash into one.

So, putting a machine’s “intention” aside (if machines are even capable of having intentions), we can still resent not having that firewall between us and a mechanical decision to harm us.  It’s not unreasonable to want the chance of better judgment, mercy, compassion, and so on that we have only with humans—even if humans aren’t the most reliable safeguard against a bad decision to begin with.  This is to say that critics can still object to death-by-robot, even if the car does not intend or is not designed to kill.  The fact that a machine can make a decision that foreseeably leads to harm or death is problem enough.

Now, how often that chance would come into play, including how many lives could be saved by robot cars, is relevant to whether the loss of this firewall is worth it.  I’m not addressing that larger question here, but only identifying one of many presumptive objections that might be raised against autonomous cars.  These objections ultimately may or may not be serious, but we won’t know until we consider them; and the military-drone debate and others in robot ethics can give us a running start.

Stanford's autonomous Audi. (Photo credit: Steve Jurvetson)

Why care about ethics?

Though there are many reasons to think about ethics, the most convincing for industry is that ethics can promote self-interest.  Engaging with ethics can help to anticipate and avoid potholes that could slow or stall business, as the defense community is learning with its drones and robots.  Further, if ethics is ignored and the robotic car behaves badly, a powerful case could be made that auto manufacturers were negligent in the design of their product, opening them up to tremendous legal liability.

Ethics is also vital as a general compass for navigating uncharted roads in law and regulation.  Nearly all laws and regulations today do not contemplate a non-human driver, so it is unclear how they would account for self-driving technologies.  When laws conflict with the realities of the technology—such as laws that require one or both hands on the wheel, or that forbid driving while impaired or distracted (i.e., laws that seem to be moot with autonomous driving)—it is useful to return to “first principles” in ethics and philosophy to understand what the law ought to be or how it should be interpreted.

Another reason for industry to monitor ethical debates is this:  If restrictions come to pass for military robots, they could unintentionally cross over to the civilian realm and affect the autonomous driving industry and other robotics.  For instance, if the rationale above—that it’s wrong to allow machines to make decisions that directly injure people—is what underwrites a ban or restrictions on military robots, then the same rationale could be used against robot cars, unless we explain what the relevant difference or disanalogy is.

Any industry would do well to follow this advice: “Those who cannot remember the past are condemned to repeat it.”  Today, we see activists campaigning against “killer” military robots that don’t yet exist, partly on the grounds that machines should never be empowered to make life-and-death decisions.  It’s not outside the realm of possibility that the autonomous car industry could suffer the same precautionary backlash, if the industry doesn’t appear to be taking ethics seriously.

****

Acknowledgements:  Some of this research is supported by California Polytechnic State University, San Luis Obispo; Center for Automotive Research at Stanford (CARS); Daimler and Benz Foundation; and Office of Naval Research.  The statements expressed here are the author’s alone and do not necessarily reflect the views of the aforementioned persons or organizations.