Normative ethics has been shown to help automated planners make ethically aware decisions. However, state-of-the-art planning technologies do not provide a simple and direct way to support ethical features. Here, we propose a new theoretical framework based on a construct, called an ethical rule, that allows us to model preferences amongst ethically charged features and to capture various ethical theories. We show how the framework can model and combine the strengths of these theories. We then demonstrate that classical planning domains extended with ethical rules can be compiled into soft goals in PDDL.
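To give a rough idea of the kind of compilation this abstract refers to, the Python sketch below turns ranked, ethically charged features into PDDL3 soft goals (named preferences) and a violation penalty in the plan metric. The names `EthicalRule` and `compile_to_soft_goals`, the example rules, and the penalties are illustrative assumptions, not the paper's actual encoding.

```python
from dataclasses import dataclass

@dataclass
class EthicalRule:
    """A hypothetical ethical rule: an ethically charged feature expressed
    as a PDDL condition, together with a penalty reflecting its rank."""
    name: str
    condition: str   # condition the plan should satisfy, e.g. "(not (harmed person1))"
    penalty: int     # metric weight paid if the preference is violated

def compile_to_soft_goals(rules):
    """Compile ethical rules into PDDL3 soft goals: one named preference per
    rule, plus a metric term penalising each violated preference."""
    prefs = [f"(preference {r.name} {r.condition})" for r in rules]
    penalties = [f"(* {r.penalty} (is-violated {r.name}))" for r in rules]
    goal = "(:goal (and " + " ".join(prefs) + "))"
    metric = "(:metric minimize (+ (total-cost) " + " ".join(penalties) + "))"
    return goal + "\n" + metric

# Example: avoiding harm is ranked above keeping a delivery promise.
rules = [
    EthicalRule("no-harm", "(not (harmed person1))", penalty=10),
    EthicalRule("keep-promise", "(delivered parcel1)", penalty=3),
]
print(compile_to_soft_goals(rules))
```

Because the penalties are ordered, a cost-optimal planner will sacrifice the lower-ranked preference before the higher-ranked one, which is one way such a ranking over ethical features can be enforced through soft goals.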
Case-based reasoning (CBR) systems leverage past experiences to make decisions. Recently, the AI community has taken an interest in making CBR systems explainable. Logic-based frameworks make answers straightforward to explain; however, unlike probabilistic techniques, they struggle in the face of conflicting information. We show how probabilistic inductive logic programming (PILP) can be applied in CBR systems to make transparent decisions that combine logic and probabilities. We then demonstrate how our approach can be applied in scenarios presenting uncertainty.
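As a minimal, library-free sketch of the idea of weighting case-based conclusions with probabilistic rules, the example below aggregates the support of retrieved cases for an outcome using a noisy-OR combination, which mirrors how a PILP engine aggregates independent probabilistic clauses. The `Case` class, similarity measure, and toy data are hypothetical and not the approach described in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A past case: its features, its recorded outcome, and the probability
    attached to the rule 'if a new problem matches this case, reuse its outcome'."""
    features: dict
    outcome: str
    rule_prob: float

def similarity(a: dict, b: dict) -> float:
    """Toy similarity: fraction of feature keys on which the two cases agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def outcome_probability(query: dict, cases: list, outcome: str) -> float:
    """Combine the probabilistic rules of all retrieved cases supporting
    `outcome`, treated as independent causes (noisy-OR)."""
    p_not = 1.0
    for c in cases:
        if c.outcome == outcome:
            p_not *= 1.0 - c.rule_prob * similarity(query, c.features)
    return 1.0 - p_not

cases = [
    Case({"fever": True, "cough": True}, "flu", 0.8),
    Case({"fever": True, "cough": False}, "flu", 0.6),
    Case({"fever": False, "cough": True}, "cold", 0.9),
]
query = {"fever": True, "cough": True}
print(outcome_probability(query, cases, "flu"))  # probability the query case is 'flu'
```

The probability returned for each candidate outcome is traceable to the individual cases and rule probabilities that produced it, which is the kind of transparency the combination of logic and probabilities is meant to provide.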
We consider the problem of representing knowledge within the cognitive model of a robot that must reason about its actions. We propose a new ontological transformation system able to model different levels of knowledge granularity. This model makes it possible to unfold the sequences of actions the robot performs, improving scrutability.
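A minimal sketch of what unfolding an action across granularity levels might look like is given below, assuming a simple dictionary-based refinement mapping. The names `REFINEMENTS` and `unfold`, and the example actions, are hypothetical illustrations, not the ontological transformation system described in the abstract.

```python
# Each abstract action maps to the finer-grained actions it is composed of.
REFINEMENTS = {
    "serve(cup)": ["locate(cup)", "grasp(cup)", "carry(cup, table)", "release(cup)"],
    "grasp(cup)": ["open(gripper)", "move_to(cup)", "close(gripper)"],
}

def unfold(action: str, depth: int = 1) -> list:
    """Unfold an action into the next `depth` levels of granularity,
    leaving actions with no known refinement unchanged."""
    if depth == 0 or action not in REFINEMENTS:
        return [action]
    return [step
            for sub in REFINEMENTS[action]
            for step in unfold(sub, depth - 1)]

# Explaining the robot's behaviour at two levels of detail:
print(unfold("serve(cup)", depth=1))
print(unfold("serve(cup)", depth=2))
```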