Ethical Decision Making During Automated Vehicle Crashes

10/30/2020
by Noah Goodall, et al.

Automated vehicles have received much attention recently, particularly the DARPA Urban Challenge vehicles, Google's self-driving cars, and various others from auto manufacturers. These vehicles have the potential to significantly reduce crashes and improve roadway efficiency by automating the responsibilities of the driver. Still, automated vehicles are expected to crash occasionally, even when all sensors, vehicle control components, and algorithms function perfectly. If a human driver is unable to take control in time, a computer will be responsible for pre-crash behavior. Unlike other automated vehicles, such as aircraft (where every collision is catastrophic) and guided track systems (which can only avoid collisions in one dimension), automated roadway vehicles can predict various crash trajectory alternatives and select the path with the lowest damage or likelihood of collision. In some situations, the preferred path may be ambiguous. This study investigates automated vehicle crashing and concludes the following: (1) automated vehicles will almost certainly crash; (2) an automated vehicle's decisions preceding certain crashes will have a moral component; and (3) there is no obvious way to effectively encode complex human morals in software. A three-phase approach to developing ethical crashing algorithms is presented, consisting of a rational approach, an artificial intelligence approach, and a natural language requirement. The phases are theoretical and should be implemented as the technology becomes available.
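
To make the trajectory-selection idea concrete, the sketch below shows one way a minimum-expected-harm comparison could be expressed in software. Everything here is a hypothetical illustration rather than the paper's method: the Trajectory class, the probability and damage figures, and the simple product-based harm model are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    """A candidate pre-crash path with hypothetical risk estimates."""
    name: str
    collision_probability: float  # estimated probability of any collision, in [0, 1]
    damage_if_collision: float    # estimated severity of harm given a collision

def expected_harm(trajectory: Trajectory) -> float:
    """Expected harm: probability of collision times severity given collision."""
    return trajectory.collision_probability * trajectory.damage_if_collision

def select_trajectory(candidates: list[Trajectory]) -> Trajectory:
    """Choose the candidate trajectory with the lowest expected harm."""
    return min(candidates, key=expected_harm)

if __name__ == "__main__":
    # Hypothetical numbers for illustration only.
    options = [
        Trajectory("brake in lane", collision_probability=0.9, damage_if_collision=2.0),
        Trajectory("swerve into oncoming lane", collision_probability=0.3, damage_if_collision=8.0),
        Trajectory("swerve onto shoulder", collision_probability=0.2, damage_if_collision=3.0),
    ]
    best = select_trajectory(options)
    print(f"Selected: {best.name} (expected harm = {expected_harm(best):.2f})")
```

Even this toy example forces a value judgment: whether damage_if_collision should weight occupant harm, pedestrian harm, and property damage equally is an ethical choice rather than an engineering one, which is exactly the difficulty the abstract's third conclusion points to.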
