Autonomous systems are being developed and deployed in situations that may require some degree of ethical decision-making ability. As a result, research in machine ethics has proliferated in recent years. This work has included using moral dilemmas as validation mechanisms for implementing decision-making algorithms in ethically loaded situations. Using trolley-style problems in the context of autonomous vehicles as a case study, I argue (1) that this is a misapplication of philosophical thought experiments because (2) it fails to appreciate the purpose of moral dilemmas, and (3) this has potentially catastrophic consequences; however, (4) there are uses of moral dilemmas in machine ethics that are appropriate, and the novel situations that arise in a machine-learning context can shed some light on philosophical work in ethics.
I consider how complex logical operations might self-assemble in a signalling-game context via composition of simpler underlying dispositions. On the one hand, agents may take advantage of pre-evolved dispositions; on the other hand, they may co-evolve dispositions as they simultaneously learn to combine them to display more complex behaviour. In either case, the evolution of complex logical operations can be more efficient than evolving such capacities from scratch. Showing how complex phenomena like these might evolve provides an additional path to the possibility of evolving more or less rich notions of compositionality. This helps provide another facet of the evolutionary story of how sufficiently rich, human-level cognitive or linguistic capacities may arise from simpler precursors.
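The basic mechanism can be sketched in a few lines. The following toy (an illustration of the general idea, not the paper's actual model) uses simple urn-based reinforcement learning to evolve a unary "logic game" disposition (here, negation) between a sender and a receiver, and then reuses that pre-evolved disposition by composing it with itself, rather than learning the composite behaviour from scratch. All names and parameters are assumptions for the sketch.

```python
import random

class Urn:
    """Choose among options with probability proportional to accumulated weight."""
    def __init__(self, options):
        self.weights = {o: 1.0 for o in options}

    def choose(self):
        r = random.uniform(0, sum(self.weights.values()))
        for option, w in self.weights.items():
            r -= w
            if r <= 0:
                return option
        return option  # floating-point fallback

    def reinforce(self, option, amount=1.0):
        self.weights[option] += amount

def evolve_unary_game(target, rounds=50_000, seed=0):
    """Reinforcement-learn sender/receiver dispositions for a unary map on {0, 1}."""
    random.seed(seed)
    sender = {s: Urn([0, 1]) for s in (0, 1)}    # state  -> signal
    receiver = {m: Urn([0, 1]) for m in (0, 1)}  # signal -> act
    for _ in range(rounds):
        state = random.choice((0, 1))
        signal = sender[state].choose()
        act = receiver[signal].choose()
        if act == target(state):                 # success: reinforce both urns
            sender[state].reinforce(signal)
            receiver[signal].reinforce(act)
    def disposition(state):
        # Read off the (near-)deterministic learned end-to-end map.
        sig = max(sender[state].weights, key=sender[state].weights.get)
        return max(receiver[sig].weights, key=receiver[sig].weights.get)
    return disposition

# Pre-evolve NOT as a unary disposition, then compose it with itself:
not_disp = evolve_unary_game(lambda x: 1 - x)
double_neg = lambda x: not_disp(not_disp(x))  # NOT(NOT(x)) = identity
```

The composition step is the point: once `not_disp` exists, the identity map comes for free, whereas evolving it directly would require another full learning run.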
Sometimes retracted or thoroughly refuted scientific information is used and propagated long after it is understood to be misleading. Likewise, sometimes retracted news items spread and persist, even after it has been publicly established that they are false. In this paper, we use agent-based models of epistemic networks to explore the dynamics of retraction. In particular, we focus on why false beliefs might persist, even in the face of retraction. We find that in many cases those who have received false information simply fail to receive retractions due to social dynamics. Surprisingly, we find that in some cases delaying retraction may increase its impact. We also find that retractions are most successful when issued by the original source of misinformation, rather than a separate source.
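A minimal version of the mechanism behind the first finding can be sketched as follows. This toy (an illustration only, not the paper's model; all parameters are assumptions) diffuses a false claim from a single source over a random network, then diffuses a retraction from the same source after a delay; agents drop the belief only if the retraction actually reaches them, so false belief persists exactly where the retraction never arrives.

```python
import random

def simulate(n=200, k=4, delay=10, steps=60, seed=1):
    """Count agents still holding the false belief after `steps` rounds."""
    random.seed(seed)
    # Each agent listens to k randomly chosen other agents.
    neighbours = {i: random.sample([j for j in range(n) if j != i], k)
                  for i in range(n)}
    believes = {i: False for i in range(n)}          # holds the false claim
    heard_retraction = {i: False for i in range(n)}
    believes[0] = True                               # agent 0: original source
    for t in range(steps):
        if t == delay:
            heard_retraction[0] = True               # source issues retraction
        new_b = dict(believes)
        new_r = dict(heard_retraction)
        for i in range(n):
            for j in neighbours[i]:
                # Unretracted believers pass the claim on; retractions spread too.
                if believes[j] and not heard_retraction[j]:
                    new_b[i] = True
                if heard_retraction[j]:
                    new_r[i] = True
        believes, heard_retraction = new_b, new_r
    # Persistent false belief = believed it, never received the retraction.
    return sum(believes[i] and not heard_retraction[i] for i in range(n))

persisting = simulate()
```

Comparing `simulate(delay=10)` against a run where the retraction is never issued within the horizon (e.g. `delay=60` with `steps=60`) shows the retraction reducing persistent false belief; richer social dynamics, as in the paper, can instead leave pockets the retraction never reaches.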
We are concerned here with how structural properties of language may come to reflect features of the world in which it evolves. As a concrete example, we will consider how a simple term language might evolve to support the principle of indifference over state descriptions in that language. The point is not that one is justified in applying the principle of indifference to state descriptions in natural language. Instead, it is that one should expect a language that has evolved in the context of facilitating successful action to reflect probabilistic features of the world in which it evolved.
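One way to make the idea concrete is a toy in which a single term boundary evolves under use (this is an illustrative sketch under assumed dynamics, not the paper's model). States are drawn from a skewed distribution, and the boundary between two terms is nudged toward each observation, so it settles where the two terms cover equal probability mass; indifference over the resulting state descriptions is then (approximately) correct in that world, even though the terms' metric sizes differ.

```python
import random

random.seed(0)
# A skewed "world": states are exponentially distributed, not uniform.
samples = [random.expovariate(1.0) for _ in range(100_000)]

# Evolve a term boundary by stochastic approximation with a decaying step:
# nudge it up when the observed state exceeds it, down otherwise. This
# converges to the median, i.e. the point splitting probability mass evenly.
theta = 0.0
for t, x in enumerate(samples):
    step = 1.0 / (t + 10)
    theta += step if x > theta else -step

# Each term ("big" / "small") now covers about half the probability mass,
# so indifference over the two state descriptions matches the world.
frac_big = sum(x > theta for x in samples) / len(samples)
```

The boundary ends up near the distribution's median (about 0.69 for this exponential), not its midpoint, which is the sense in which the evolved language reflects probabilistic features of the world rather than its metric structure.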