Building on previous work [4, 5] that bridged Formal Learning Theory and Dynamic Epistemic Logic in a topological setting, we introduce a Dynamic Logic for Learning Theory (DLLT), extending Subset Space Logics [17, 9] with dynamic observation modalities [o]ϕ, as well as with a learning operator L(o⃗), which encodes the learner's conjecture after observing a finite sequence of data o⃗. We completely axiomatise DLLT, study its expressivity and use it to characterise various notions of knowledge, belief, and learning.
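For orientation only, a minimal sketch of how such a language might be laid out, assuming the standard subset-space knowledge modality K as the base (the paper's official grammar and clauses may differ):

ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | Kϕ | [o]ϕ | L(o⃗)

where o ranges over single observations and o⃗ over finite sequences of observations; [o]ϕ reads "after observing o, ϕ holds", and L(o⃗) reads "the learner's conjecture after observing the sequence o⃗".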
We introduce Arbitrary Public Announcement Logic with Memory (APALM), obtained by adding to the models a ‘memory’ of the initial states, representing the information before any communication took place (“the prior”), and adding to the syntax operators that can access this memory. We show that APALM is recursively axiomatizable (in contrast to the original Arbitrary Public Announcement Logic, for which the corresponding question is still open). We present a complete recursive axiomatization that includes a natural finitary rule, and study this logic’s expressivity and the appropriate notion of bisimulation. We then examine Group Announcement Logic with Memory (GALM), the extension of APALM obtained by adding to its syntax group announcement operators, and provide a complete finitary axiomatization (again in contrast to the original Group Announcement Logic, for which the only known axiomatization is infinitary). We also show that, in the memory-enhanced context, there is a natural reduction of the so-called coalition announcement modality to group announcements (in contrast to the memory-free case, where this natural translation was shown to be invalid).