The early literature on epistemic logic in philosophy focused on reasoning about the knowledge or belief of a single agent, especially on controversies about "introspection axioms" such as the 4 and 5 axioms. By contrast, the later literature on epistemic logic in computer science and game theory has focused on multi-agent epistemic reasoning, with the single-agent 4 and 5 axioms largely taken for granted. In the relevant multi-agent scenarios, it is often important to reason about what agent A believes about what agent B believes about what agent A believes; but it is rarely important to reason just about what agent A believes about what agent A believes. This raises the question of the extent to which single-agent introspection axioms actually matter for multi-agent epistemic reasoning. In this paper, we formalize and answer this question. To formalize the question, we first define a set of multi-agent formulas that we call agent-alternating formulas, including formulas like □_a □_b □_a p but not formulas like □_a □_a p. We then prove, for the case of belief, that if one starts with multi-agent K or KD, then adding both the 4 and 5 axioms (or adding the B axiom) does not allow the derivation of any new agent-alternating formulas; in this sense, introspection axioms do not matter. By contrast, we show that such conservativity results fail for knowledge and multi-agent KT, though they hold with respect to a smaller class of agent-nonrepeating formulas.
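For reference, the 4, 5, and B axioms named in the abstract have standard formulations in modal logic; a minimal LaTeX rendering, with □_a φ read as "agent a believes (or knows) φ", is:

```latex
\begin{align*}
  \text{4:} \quad & \Box_a \varphi \to \Box_a \Box_a \varphi
    && \text{(positive introspection)} \\
  \text{5:} \quad & \neg \Box_a \varphi \to \Box_a \neg \Box_a \varphi
    && \text{(negative introspection)} \\
  \text{B:} \quad & \varphi \to \Box_a \neg \Box_a \neg \varphi
    && \text{(symmetry axiom)}
\end{align*}
```

The agent-alternating formulas are then exactly those in which no agent's box immediately follows a box for the same agent, which admits □_a □_b □_a p but excludes □_a □_a p.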
Reasoning about what other people know is an important cognitive ability, known as epistemic reasoning, which has fascinated psychologists, economists, and logicians. In this paper, we propose a computational model of humans’ epistemic reasoning, including higher-order epistemic reasoning—reasoning about what one person knows about another person’s knowledge—that we test in an experiment using a deductive card game called “Aces and Eights”. Our starting point is the model of perfect higher-order epistemic reasoners given by the framework of dynamic epistemic logic. We modify this idealized model with bounds on the level of feasible epistemic reasoning and stochastic update of a player’s space of possibilities in response to new information. These modifications are crucial for explaining the variation in human performance across different participants and different games in the experiment. Our results demonstrate how research on epistemic logic and cognitive models can inform each other.
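To illustrate the kind of modification the abstract describes, the sketch below shows one way a stochastic update of a player's space of possibilities could look: inconsistent worlds are discarded only with some probability, so the update is imperfect. The function name, the `keep_noise` parameter, and the simplified two-card hands are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch only: stochastic elimination of possible worlds after an
# announcement, as one way to model imperfect belief update.
import random


def stochastic_update(worlds, is_consistent, keep_noise=0.1, rng=None):
    """Remove worlds inconsistent with new information, but imperfectly.

    worlds        -- iterable of candidate worlds (e.g., possible card deals)
    is_consistent -- predicate: is the world compatible with the announcement?
    keep_noise    -- probability of failing to discard an inconsistent world
    """
    rng = rng or random.Random()
    kept = []
    for w in worlds:
        # A consistent world always survives; an inconsistent one survives
        # only with probability keep_noise (the stochastic element).
        if is_consistent(w) or rng.random() < keep_noise:
            kept.append(w)
    return kept


# Toy example loosely inspired by "Aces and Eights": hands of two cards,
# each an ace ("A") or an eight ("8").
worlds = [("A", "A"), ("A", "8"), ("8", "8")]
# Announcement: "I do not hold two aces."
updated = stochastic_update(worlds, lambda w: w != ("A", "A"), keep_noise=0.1)
print(updated)
```

A depth bound on higher-order reasoning would be layered on top of such an update by limiting how many levels of "what A believes about what B believes" a simulated player computes.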