By SecureWorld News Team
Tue | Oct 30, 2018 | 8:11 AM PDT

If an autonomous vehicle is out of options to avoid an accident, how should it proceed?

Should the self-driving car spare the lives of occupants by swerving onto the sidewalk and hitting pedestrians before it stops?

Or should it ram the autonomous car into the vehicle ahead, killing passengers but protecting those nearby pedestrians?

Does the answer change if there's a child involved, or a woman or a senior citizen, either inside the car or on the sidewalk? 


New research suggests that humans answer these questions differently in different parts of the world, and the AI that will control their cars someday should be programmed differently, as well.

The MIT Media Lab just released details from what it calls its "Moral Machine" experiment, an online platform that lets real people experience the moral dilemmas faced by autonomous vehicles.

Researchers used it to look at 40 million decisions in 10 languages, from millions of people in 233 countries and territories.

Now that's what we call Big Data.

On a global scale, the blue bars in the chart below show whom people collectively most want an autonomous car to spare when it has to choose. AI sensors should scan for strollers, kids, and pregnant women: they are the top priorities to protect.

[Chart: global preferences for which characters autonomous vehicles should spare, from most to least]

The purple bars represent those deemed most expendable if a self-driving car has to take evasive action. An overweight older man ranks below most other options, and the least important to spare in an emergency are dogs, criminals, and cats. Apologies to the cats of the world for ranking even lower than criminals!

Perhaps this makes you uncomfortable because it is an exercise in choosing who should die or be injured. These are, however, the kinds of programming decisions that will be built into autonomous vehicles.

And the MIT researchers found it gets even more complicated if you look at different regions of the globe. Some cultures revere seniors, others favor law followers (and can't stand jaywalkers, for example), and some feel more strongly than others that the car should protect the passengers it is carrying as a top priority.

This will be a very challenging problem for creators of autonomous vehicles. It's a situation that has not existed before.

The study's authors put it like this:

"Human drivers who die in crashes cannot report whether they were faced with a dilemma; and human drivers who survived a crash may not have realized they were in a dilemma situation."

The difference is that Artificial Intelligence will understand it is in a dilemma situation and must divide the risk among those it is carrying and those around it.
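To make that idea concrete, here is a minimal, purely hypothetical Python sketch of how survey-derived preference weights and regional adjustments might feed that kind of risk split. The category names, weights, and regions below are illustrative placeholders, not values or methods from the MIT study or from any manufacturer's actual software.

```python
# Illustrative sketch only: toy "spare" weights per character category.
# Higher weight = respondents more strongly prefer the car avoid harming them.
BASE_WEIGHTS = {
    "stroller": 1.0,
    "child": 0.9,
    "pregnant_woman": 0.85,
    "adult_pedestrian": 0.5,
    "passenger": 0.5,
    "elderly_man": 0.3,
    "dog": 0.1,
    "criminal": 0.08,
    "cat": 0.05,
}

# Hypothetical regional adjustments, standing in for the study's finding that
# some cultures weight seniors, rule-followers, or passengers differently.
REGIONAL_ADJUSTMENTS = {
    "eastern": {"elderly_man": +0.3},
    "western": {"passenger": +0.1},
}

def harm_score(affected, region):
    """Total weighted harm of one outcome; the car would prefer the lower score."""
    adjust = REGIONAL_ADJUSTMENTS.get(region, {})
    return sum(BASE_WEIGHTS[who] + adjust.get(who, 0.0) for who in affected)

# Compare two unavoidable outcomes like the article's opening dilemma.
swerve_onto_sidewalk = ["elderly_man", "elderly_man"]   # harms two elderly pedestrians
brake_into_vehicle   = ["passenger", "passenger"]       # harms the two occupants

for region in ("western", "eastern"):
    a = harm_score(swerve_onto_sidewalk, region)
    b = harm_score(brake_into_vehicle, region)
    if a < b:
        choice = "swerve onto the sidewalk (spare the passengers)"
    else:
        choice = "brake into the vehicle ahead (spare the pedestrians)"
    print(f"{region}: sidewalk={a:.2f}, vehicle={b:.2f} -> {choice}")
```

With these made-up numbers, the two regions reach opposite decisions in the same scenario, which is exactly the programming problem the researchers are pointing at.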

Someday, we will require our self-driving car to be a moral machine. But what, exactly, is that?

[MORE: Check out the Moral Machine experiment for yourself, published in Nature, the International Weekly Journal of Science.]
