As agents moving through an environment shared with a range of other road users, from pedestrians and bicyclists to other human or automated drivers, automated vehicles continuously interact with the humans around them. The nature of these interactions is determined by the vehicle's programming and the priorities its programmers place there. Just as human drivers display a range of driving styles and preferences, automated vehicles present a broad canvas on which designers can craft responses to different driving scenarios. These scenarios can be dramatic, such as plotting a trajectory in a dilemma situation where an accident is unavoidable, or more routine, such as determining a proper following distance from the vehicle ahead or deciding how much space to give a pedestrian standing at the corner. In all cases, however, the behavior of the vehicle and its control algorithms will ultimately be judged not by statistics or test-track performance but by the standards and ethics of the society in which they operate. In the literature on robot ethics, it remains arguable whether artificial agents without free will can truly exhibit moral behavior [1]. It seems certain, however, that other road users and society will interpret the actions of automated vehicles, and the priorities set by their programmers, through an ethical lens. Whether in a court of law or the court of public opinion, the control algorithms that determine the actions of automated vehicles will be subject to close scrutiny after the fact if those actions result in injury or damage. In a less dramatic, if no less important, manner, the way these vehicles move through the social interactions that define traffic on a daily basis will strongly influence their societal acceptance. This places a considerable responsibility on the programmers of automated vehicles.