Vehicle problem scenario for a self-driving car. This material relates to the paper "The social dilemma of autonomous vehicles," by Jean-François Bonnefon of the University of Toulouse, France, and colleagues, published in the 24 June 2016 issue of Science (AAAS).
CREDIT: Iyad Rahwan

Many of us, as drivers, face the possibility of making a split-second decision about the lesser of two evils in a crash scenario. Most of us don't have these rules written down anywhere; we would make the tough call on the fly, based on what we thought gave the best outcome. An autonomous vehicle, however, really does have to follow its programming in these situations, and make profound choices about whose lives take priority. Carmakers must program these moral and ethical decisions meticulously and make them clearly known to the public before vehicles are given full autonomy.

Where we are today

As things stand, even putting aside crisis situations, driverless vehicles are already making implicit choices about risk every time they execute a complex maneuver. But the question of whose lives matter more in an unavoidable crash, where staying on course may kill a pedestrian while swerving may endanger the passenger(s) and perhaps the occupants of other vehicles, is profoundly murky territory. The obstacle to this kind of artificial-intelligence decision making has been the assumption that moral decisions are strongly context dependent and therefore cannot be modeled algorithmically. But a new research paper, based on virtual-reality experiments investigating human behavior and moral assessments, may change that thinking.
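To make the idea of implicit risk trade-offs concrete, here is a minimal sketch, not any manufacturer's actual planner: all the maneuver names, probabilities, and weights below are hypothetical. It shows how a planner that scores candidate maneuvers by weighted expected harm is already encoding a moral judgment in its choice of weights.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_harm_passenger: float   # estimated probability of harming a passenger
    p_harm_pedestrian: float  # estimated probability of harming a pedestrian

# Hypothetical weights: any choice here is an implicit moral judgment
# about whose risk counts for more.
W_PASSENGER = 1.0
W_PEDESTRIAN = 1.0

def expected_harm(m: Maneuver) -> float:
    """Score a maneuver by its weighted expected harm."""
    return (W_PASSENGER * m.p_harm_passenger
            + W_PEDESTRIAN * m.p_harm_pedestrian)

# Two candidate maneuvers with made-up risk estimates.
candidates = [
    Maneuver("brake in lane", p_harm_passenger=0.02, p_harm_pedestrian=0.10),
    Maneuver("swerve left", p_harm_passenger=0.08, p_harm_pedestrian=0.01),
]

# The planner picks the lowest-scoring maneuver; changing the weights
# changes the decision, which is exactly the ethical question at stake.
best = min(candidates, key=expected_harm)
print(best.name)
```

Shifting either weight flips which maneuver wins, so the weighting itself is the moral policy, whether or not anyone wrote it down as one.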

The Study

To peer into the human mind and understand human morals and instincts, researchers from the Institute of Cognitive Science at the University of Osnabrück used immersive virtual reality to study human behavior in simulated road-traffic crisis scenarios. The observed behavior was then captured by statistical models, yielding rules with an associated degree of explanatory power.

We can model human moral behavior for autonomous vehicles

Within the scope of unavoidable traffic collisions, the new study showed that each participant's moral decisions can be boiled down to a rather simple value-of-life-based model, sketched below. This implies that human moral behavior can be modeled algorithmically, once there is agreement on the rules. Of course, much work remains on deciding which moral behavior should be favored, since each of us would likely choose differently based on our individual priorities. The study's authors note that autonomous cars are just the beginning: robots will eventually be deployed in other critical environments, such as hospitals, where life-and-death choices sometimes need to be made. The authors warn that we are at the dawn of a new era that requires clear, standardized rules; otherwise, machines will be making decisions for us that we may come to regret.
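As an illustration only (the paper's actual model and fitted values are not reproduced here, and every number in the table below is made up), a value-of-life model can be sketched as assigning each obstacle category a single scalar and steering toward the option that puts the lower total value at risk:

```python
# Hypothetical value-of-life table; a study of this kind fits such values
# from participants' observed choices, but these numbers are invented.
VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.3,
    "dog": 0.4,
    "inanimate object": 0.05,
}

def lane_value(occupants: list[str]) -> float:
    """Total value of life at risk in a given lane."""
    return sum(VALUE_OF_LIFE[o] for o in occupants)

def choose_lane(left: list[str], right: list[str]) -> str:
    """Pick the lane that puts the lower total value of life at risk."""
    return "left" if lane_value(left) < lane_value(right) else "right"

# Example dilemma: swerve toward a dog or stay on course toward an adult.
print(choose_lane(left=["dog"], right=["adult"]))  # -> "left"
```

The striking point is how little machinery such a model needs: a handful of scalars can reproduce most observed choices, which is what makes an algorithmic treatment plausible once society agrees on what those scalars should be.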