Description
As urban populations increase, so does the demand for innovative transportation solutions that reduce traffic congestion, cut pollution, and lessen inequities by providing mobility for all kinds of people. One emerging solution is self-driving vehicles, which have been touted as a safer mode of driving because they could reduce fatalities from driving accidents. While fully automated vehicles are still in the testing and development phase, the United Nations predicts their full debut by 2030 [1]. Much effort is devoted to the technology that executes driving decisions, such as controls, communications, and sensing, yet engineers often leave ethics as an afterthought. The truth is that autonomous vehicles will still encounter unavoidable crash scenarios even when every subsystem is working perfectly. Because of this, ethics must be considered and implemented in machine learning to avoid an ethical catastrophe that could delay or completely halt future autonomous vehicle development. This paper presents an experiment for building a more complete view of human morality and how it translates into ideal driving behaviors.
This paper analyzes responses to variations of the Trolley Problem [5] presented in a simulated driving environment and as still images from MIT's Moral Machine website [8] to better understand how humans respond to various crashes. Participants' driving habits and personal values were also collected, though the bulk of that analysis is not included here. The simulation results show that, in most driving scenarios, people would rather sacrifice themselves than harm people outside the vehicle. The Moral Machine scenarios show that this tendency toward self-sacrifice is not fixed: the trend toward harming one's own vehicle weakened once passengers were introduced. Further supporting this interpretation is the importance participants placed on Family Security over any other value.
Suggestions for implementing ethics into autonomous vehicle crash behavior stem from the results of this experiment but depend on further research and larger sample sizes. Once enough data is collected and analyzed, a baseline for the human moral domain may be agreed upon, quantified, and turned into hard rules governing how self-driving cars should act in different scenarios. With these hard rules as boundary conditions, artificial intelligence should provide training and incremental learning for scenarios the rules cannot resolve. Finally, the neural networks that make decisions in artificial intelligence must move from their current "black box" state to something more traceable. This will allow researchers to understand why an autonomous vehicle made a certain decision and to adjust its behavior as needed.
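To make the proposed architecture concrete, the sketch below illustrates one possible way to layer hard ethical rules over a learned policy: rules that fire resolve the scenario directly (and are traceable to a named rule), while uncovered cases fall through to a trained model. Everything here is a hypothetical illustration; the scenario fields, the example rule, and the function names are assumptions for exposition, not part of the thesis.

```python
# Hypothetical sketch: hard ethical rules as boundary conditions, with a
# learned policy handling scenarios the rules do not cover. All names and
# rule content below are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Scenario:
    pedestrians_in_path: int
    passengers: int
    can_swerve: bool


def rule_protect_bystanders(s: Scenario) -> Optional[str]:
    # Example rule reflecting the self-sacrifice trend observed in the
    # simulation: with no passengers aboard, prefer harming the vehicle
    # itself over people outside it. Returns None when it does not apply.
    if s.pedestrians_in_path > 0 and s.passengers == 0 and s.can_swerve:
        return "swerve_off_road"
    return None


HARD_RULES: list[Callable[[Scenario], Optional[str]]] = [rule_protect_bystanders]


def decide(scenario: Scenario, learned_policy: Callable[[Scenario], str]) -> str:
    """Apply hard rules first; defer to the learned policy otherwise."""
    for rule in HARD_RULES:
        action = rule(scenario)
        if action is not None:
            return action  # rule fired: decision is traceable to this rule
    return learned_policy(scenario)  # uncovered case: use the trained model


if __name__ == "__main__":
    # Stand-in policy; a real system would use a trained, auditable model.
    fallback = lambda s: "brake_hard"
    print(decide(Scenario(pedestrians_in_path=2, passengers=0, can_swerve=True), fallback))
    print(decide(Scenario(pedestrians_in_path=0, passengers=2, can_swerve=True), fallback))
```

The rule-first ordering matters for the traceability goal raised above: any decision produced by a hard rule can be attributed to that specific rule, narrowing the "black box" to only the cases the rules leave open.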
Details
Title
- An Investigation of Morality in Driving Situations as a Basis for Determining Autonomous Vehicle Ethics
Contributors
- Beaulieu, Natalie Nicole (Author)
- Berman, Spring (Thesis director)
- Cooke, Nancy (Committee member)
- Watts College of Public Service & Community Solutions (Contributor)
- School for Engineering of Matter, Transport & Energy (Contributor)
- Mechanical and Aerospace Engineering Program (Contributor)
- Barrett, The Honors College (Contributor)
Date Created
2019-05