This paper explores the analysis of ability, where ability is understood in the epistemic sense, in contrast to what might be called a causal sense. There are plenty of cases where an agent is able to perform an action that guarantees a given result even though she does not know which of her actions guarantees that result. Such an agent possesses the causal ability but lacks the epistemic ability. The standard analysis of such epistemic abilities relies on the notion of action types (as opposed to action tokens) and posits that an agent has the epistemic ability to do something if and only if there is an action type available to her that she knows guarantees it. We show that these action types are not needed: we present a formalism without action types that can simulate analyses of epistemic ability that rely on action types. Our formalism is a standard epistemic extension of the theory of "seeing to it that" (stit), which arose from a modal tradition in the logic of action.
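To make the causal/epistemic contrast concrete, here is a minimal sketch in Python; the toy model of states, actions, and indistinguishability is our illustrative assumption, not the paper's formalism. Causal ability asks whether some available action guarantees the outcome at the actual state; epistemic ability asks whether a single action guarantees it at every state the agent cannot distinguish from the actual one.

```python
# A minimal sketch, assuming a toy model of states, nondeterministic
# outcomes, and an indistinguishability relation; none of these names
# come from the paper. `indist[s]` is the set of states the agent
# cannot epistemically distinguish from s.

def guarantees(action, state, outcome, result):
    """Performing `action` in `state` yields `outcome` under every
    resolution of the remaining nondeterminism."""
    return all(r == outcome for r in result[(state, action)])

def causal_ability(state, outcome, actions, result):
    # Some available action guarantees the outcome at the actual state,
    # whether or not the agent knows which one it is.
    return any(guarantees(a, state, outcome, result) for a in actions)

def epistemic_ability(state, outcome, actions, result, indist):
    # One and the same action guarantees the outcome at every state the
    # agent considers possible: she knows what to do.
    return any(all(guarantees(a, s, outcome, result) for s in indist[state])
               for a in actions)

# Classic case: the agent can force a win in either state, but with a
# different action in each, and she cannot tell the two states apart.
actions = ['a', 'b']
result = {('w1', 'a'): {'win'},  ('w1', 'b'): {'lose'},
          ('w2', 'a'): {'lose'}, ('w2', 'b'): {'win'}}
indist = {'w1': {'w1', 'w2'}, 'w2': {'w1', 'w2'}}

print(causal_ability('w1', 'win', actions, result))             # True
print(epistemic_ability('w1', 'win', actions, result, indist))  # False
```

At w1 the agent has the causal ability to win (action a guarantees it there), but no single action wins in both states she considers possible, so the epistemic ability fails.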
The formalization of action and obligation using logical languages is a topic of increasing relevance in AI ethics. An expressive syntactic and semantic framework for reasoning about agents' decisions in moral situations allows for unequivocal representations of the components of behavior that matter when assigning blame (or praise) for outcomes to those agents. Two particularly important such components are belief and belief-based action. In this work we present a logic of doxastic oughts by extending epistemic deontic stit theory with beliefs. On the one hand, the semantics for formulas involving belief operators is based on probability measures; on the other, the semantics for doxastic oughts relies on a notion of optimality whose underlying choice rule is maximization of expected utility. We introduce an axiom system for the resulting logic and discuss its soundness, completeness, and decidability. These results are significant for the line of research that aims to use proof systems for epistemic, doxastic, and deontic logics to test the ethical behavior of AI through theorem proving and model checking.
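As a rough illustration of the choice rule named above, maximization of expected utility under a subjective probability measure, consider the following sketch. It is a toy decision rule, not the paper's stit semantics; the states, probabilities, and utilities are hypothetical.

```python
# A toy rendering of the maximization-of-expected-utility choice rule;
# the probability measure and utilities below are illustrative
# assumptions, not the paper's semantics.

def expected_utility(action, belief, utility):
    """Expected utility of `action` under the agent's subjective
    probability measure `belief` over states."""
    return sum(p * utility[(state, action)] for state, p in belief.items())

def doxastically_optimal(actions, belief, utility):
    # The actions the agent ought to choose by her own lights: those
    # maximizing expected utility given what she believes.
    eu = {a: expected_utility(a, belief, utility) for a in actions}
    best = max(eu.values())
    return {a for a in actions if eu[a] == best}

# The agent believes state s1 is twice as likely as s2.
belief = {'s1': 2 / 3, 's2': 1 / 3}
utility = {('s1', 'help'): 10, ('s2', 'help'): -2,
           ('s1', 'wait'): 0,  ('s2', 'wait'): 1}
print(doxastically_optimal(['help', 'wait'], belief, utility))  # {'help'}
```

Here the expected utility of help is 6 and that of wait is 1/3, so help is the action the agent doxastically ought to take, even though help turns out badly if s2 is actual.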