Automated vehicles have received much attention recently, particularly the Defense Advanced Research Projects Agency Urban Challenge vehicles, Google's self-driving cars, and various others from auto manufacturers. These vehicles have the potential to reduce crashes and improve roadway efficiency significantly by automating the responsibilities of the driver. Still, automated vehicles are expected to crash occasionally, even when all sensors, vehicle control components, and algorithms function perfectly. If a human driver is unable to take control in time, a computer will be responsible for precrash behavior. Unlike other automated vehicles, such as aircraft, in which every collision is catastrophic, and unlike guided track systems, which can avoid collisions only in one dimension, automated roadway vehicles can predict various crash trajectory alternatives and select a path with the lowest damage or likelihood of collision. In some situations, the preferred path may be ambiguous. The study reported here investigated automated vehicle crashing and concluded the following: (a) automated vehicles would almost certainly crash, (b) an automated vehicle's decisions that preceded certain crashes had a moral component, and (c) there was no obvious way to encode complex human morals effectively in software. The paper presents a three-phase approach to develop ethical crashing algorithms; the approach consists of a rational approach, an artificial intelligence approach, and a natural language requirement. The phases are theoretical and should be implemented as the technology becomes available.
The operation of traffic signals is currently limited by the data available from traditional point sensors. Point detectors can provide only limited vehicle information at a fixed location. The most advanced adaptive control strategies are often not implemented in the field because of their operational complexity and high-resolution detection requirements. However, a new initiative known as connected vehicles allows the wireless transmission of the positions, headings, and speeds of vehicles for use by the traffic controller. A new traffic control algorithm, the predictive microscopic simulation algorithm, which uses these new, more robust data, was developed. The decentralized, fully adaptive traffic control algorithm uses a rolling-horizon strategy in which the phasing is chosen to optimize an objective function over a 15-s period in the future. The objective function uses either delay only or a combination of delay, stops, and decelerations. To measure the objective function, the algorithm uses a microscopic simulation driven by present vehicle positions, headings, and speeds. The algorithm is relatively simple, does not require point detectors or signal-to-signal communication, and is completely responsive to immediate vehicle demands. To ensure drivers' privacy, the algorithm does not store individual or aggregate vehicle locations. Results from a simulation showed that the algorithm maintained or improved performance compared with that of a state-of-the-practice coordinated actuated timing plan optimized by Synchro at low and midlevel volumes, but that performance worsened under saturated and oversaturated conditions. Testing also showed that the algorithm had improved performance during periods of unexpected high demand and the ability to respond automatically to year-to-year growth without retiming.
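The rolling-horizon strategy described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `Vehicle` fields, the two-phase layout, and the crude delay surrogate standing in for the microscopic simulation are all assumptions made for the sake of a runnable example.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    position: float  # distance to the stop bar, in metres
    speed: float     # current speed, in m/s
    approach: str    # which approach the vehicle is on, e.g. "NS" or "EW"

def predicted_delay(vehicles, green_approach, horizon=15.0):
    """Crude surrogate for the microscopic simulation: a vehicle on a
    red approach that would reach the stop bar within the horizon is
    assumed to wait for the remainder of the horizon."""
    delay = 0.0
    for v in vehicles:
        if v.approach != green_approach and v.speed > 0:
            arrival = v.position / v.speed
            if arrival < horizon:
                delay += horizon - arrival
    return delay

def choose_phase(vehicles, phases=("NS", "EW")):
    """Rolling-horizon choice: evaluate each candidate phase over the
    next 15 s and keep the one with the lowest predicted delay."""
    return min(phases, key=lambda p: predicted_delay(vehicles, p))

# Two vehicles approaching north-south, one distant vehicle east-west:
demo = [Vehicle(30.0, 15.0, "NS"), Vehicle(60.0, 15.0, "NS"),
        Vehicle(200.0, 10.0, "EW")]
print(choose_phase(demo))  # prints "NS"
```

Note that, as in the abstract, the decision uses only the instantaneous positions, headings, and speeds broadcast by connected vehicles; nothing is stored between decision cycles, which is what permits the privacy property the authors describe.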
Road vehicle travel at a reasonable speed involves some risk, even when using computer-controlled driving with failure-free hardware and perfect sensing. A fully automated vehicle must continuously decide how to allocate this risk without a human driver's oversight. These are ethical decisions, particularly in instances where an automated vehicle cannot avoid crashing. In this chapter, I introduce the concept of moral behavior for an automated vehicle, argue the need for research in this area through responses to anticipated critiques, and discuss relevant applications from machine ethics and moral modeling research.

Ethical Decision Making for Automated Vehicles

Vehicle automation has progressed rapidly this millennium, mirroring improvements in machine learning, sensing, and processing. Media coverage often focuses on the anticipated safety benefits of automation, as computers are expected to be more attentive, precise, and predictable than human drivers. Mentioned less often are the novel problems arising from automated vehicle crashes. The first problem is liability: it is currently unclear who would be at fault if a vehicle crashed while self-driving. The second problem is the ability of an automated vehicle to make ethically complex decisions while driving, particularly prior to a crash. This chapter focuses on the second problem and the application of machine ethics to vehicle automation.

Driving at any significant speed can never be completely safe. A loaded tractor trailer traveling at 100 km/h requires eight seconds to come to a complete stop, and a passenger car requires three seconds [1]. Truly safe travel would require accurate predictions of other vehicles' behavior over this time frame, something that is simply not possible given the close proximity of road vehicles. To ensure its own safety, an automated vehicle must continually assess risk: the risk of traveling a certain speed on a certain curve, of crossing the centerline to
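The stopping times quoted above follow directly from constant-deceleration kinematics. The sketch below reproduces them; the deceleration values are assumptions chosen to be physically plausible (heavy trucks brake far more gently than passenger cars), not figures from the cited source.

```python
def stopping_time(speed_kmh, decel):
    """Time in seconds to stop from speed_kmh under a constant
    deceleration given in m/s^2 (t = v / a)."""
    return (speed_kmh / 3.6) / decel

def stopping_distance(speed_kmh, decel):
    """Distance in metres covered while braking to a stop
    (d = v^2 / (2a))."""
    v = speed_kmh / 3.6
    return v * v / (2 * decel)

# Assumed decelerations: ~3.5 m/s^2 for a loaded truck, ~9.3 m/s^2 for
# a passenger car under hard braking.
print(round(stopping_time(100, 3.5), 1))  # ~7.9 s, matching the truck figure
print(round(stopping_time(100, 9.3), 1))  # ~3.0 s, matching the car figure
print(round(stopping_distance(100, 3.5)))  # a truck needs on the order of 110 m
```

The distance figure makes the risk-allocation point concrete: any hazard that materializes inside that envelope cannot be braked away, so the vehicle's only remaining choices are how and where to distribute the residual risk.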
As automated vehicles receive more attention from the media, there has been a corresponding increase in coverage of the ethical choices a vehicle may be forced to make in crash situations with no clearly safe outcome. Much of this coverage has focused on a philosophical thought experiment known as the "trolley problem," with an automated vehicle substituted for the trolley and the car's software for the bystander. While this is a stark and straightforward example of ethical decision making for an automated vehicle, it risks marginalizing the entire field if it becomes the only ethical problem in the public's mind. In this chapter, I discuss the shortcomings of the trolley problem and introduce more nuanced examples involving crash risk and uncertainty. Risk management is presented as an alternative approach, and its ethical dimensions are discussed.