Autonomous Vehicles (AVs) are increasingly embraced around the world to advance smart mobility and, more broadly, smart and sustainable cities. Algorithms form the basis of decision-making in AVs, allowing them to perform driving tasks autonomously, efficiently, and more safely than human drivers and offering various economic, social, and environmental benefits. However, algorithmic decision-making in AVs can also introduce new issues that create safety risks and perpetuate discrimination. We identify bias, ethics, and perverse incentives as key ethical issues in AV algorithmic decision-making that can create new safety risks and discriminatory outcomes. Technical issues in AVs' perception, decision-making, and control algorithms, limitations of existing AV testing and verification methods, and cybersecurity vulnerabilities can also undermine the performance of the AV system. This article investigates the ethical and technical concerns surrounding algorithmic decision-making in AVs by exploring how driving decisions can perpetuate discrimination and create new safety risks for the public. We discuss the steps taken to address these issues, highlight existing research gaps, and underscore the need to mitigate these issues through the design of AV algorithms and of policies and regulations to fully realise AVs' benefits for smart and sustainable cities.

AVs rely on sensors that collect data about their surroundings and algorithms that process the data and make and execute driving decisions through the vehicle's actuators. The quality of the data retrieved by the AV's sensors is critical for decision-making [21], and the efficiency, precision, and reliability of decision-making algorithms allow AVs to surpass the typical human driver in performing driving tasks [22]. A major component of AVs is machine-learning (ML) algorithms [23] that continuously learn and adapt to new information, which is essential for responding to unexpected situations and providing on-demand transportation services [24]. Algorithms' independence from human input and ML's data-driven nature allow AVs to significantly reduce or eliminate the human errors that have been responsible for 90% of road fatalities, such as speeding, alcohol impairment, distractions, and induced fear [25,26].

However, an overemphasis on technological solutions alone for economic development could risk neglecting social and environmental considerations and thus hinder true "smartness" [27]. Scholars have cautioned against rushing to develop smart mobility solutions such as AVs without being prepared to manage their potential "negative externalities" [1,28]. In particular, issues in algorithmic decision-making in AVs can have undesirable effects on safety and equity. Firstly, the data mining processes in AVs are susceptible to biases that lead algorithms to prioritise the safety of certain groups of road users over others and thus perpetuate discrimination [29,30]. Secondly, many scholars stress the need to design algorithms with ethical considerations to ensure that AVs make ethical driving decisions.