Bad decisions can have dire consequences. From high-exposure events such as an oil spill or a plane crash, to the smaller-scale drama of a patient who dies on the operating table, the unavoidable question soon follows: Was this mechanical failure or human error? Yet, in a society where people increasingly base their decisions on autonomous systems such as search engines, recommender systems, or social media, the distinction becomes blurred. Although these systems are based on algorithms (less material, but nonetheless mechanical), people still have to process and consider the information they provide, thus becoming the weakest link in the decision chain. In general, mechanical failure, once discovered, seems more easily addressed than human error. So if autonomous systems could be made aware of how humans judge information, they could become more judicious in advising humans, and more proactive in the way they present their information. Currently this is not the case. To change this, we have looked into decades of research on human judgement (for how the judgement of a particular system is shaped by some of its properties, see Sect. 7.6 of this book). We found a whole range of human judgements that deviate substantially from what would be normatively correct according to logic and probability theory. As an example, take the famous experiment in which Tversky and Kahneman [35] presented participants with the following text:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also