How should driverless vehicles respond to situations of unavoidable personal harm? This paper takes up the case of self-driving cars as a prominent example of algorithmic moral decision-making, an emergent type of morality evolving rapidly in a digitised business world. As its main contribution, it juxtaposes dilemma decision situations relating to ethical crash algorithms for autonomous cars with two edge cases: manually driven cars facing real-life, mundane accidents, on the one hand, and the dilemmatic situation in theoretically constructed trolley cases, on the other. The paper identifies analogies and disanalogies between the three cases with regard to decision makers, decision design, and decision outcomes. The findings are discussed from three perspectives: aspects where analogies could be established, aspects where the case of self-driving cars turns out to lie between the two edge cases, and aspects where it departs entirely from either edge case. As a main result, the paper argues that both manual driving and trolley cases are suitable points of reference for designing ethical crash algorithms only to a limited extent. Instead, it substantiates a fundamental epistemic and conceptual divergence between dilemma decision situations in the context of self-driving cars and those in the two edge cases. Finally, the paper points out areas in specific need of regulation on the road to introducing autonomous cars and sketches related considerations through the lens of the humanistic paradigm.