The present paper deals with monotonic and dual monotonic language learning from positive examples as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner produce better and better generalizations when fed more and more data on the concept to be learned. The three versions of dual monotonicity formalize the requirement that the inference device produce specializations that fit the target language better and better. We characterize strong-monotonic, monotonic, weak-monotonic, dual strong-monotonic, dual monotonic, and monotonic & dual monotonic as well as finite language learning from positive data in terms of recursively generable finite sets. These characterizations provide a unifying framework for learning from positive data under the various monotonicity constraints. Moreover, they yield additional insight into the problem of what a natural learning algorithm should look like.