2005
DOI: 10.1007/11503415_26
Variations on U-Shaped Learning

Abstract: The paper deals with the following problem: is returning to wrong conjectures necessary to achieve the full power of algorithmic learning? Returning to wrong conjectures complements the paradigm of U-shaped learning [3,7,9,24,29], in which a learner returns to old correct conjectures. We explore our problem for classical models of learning in the limit from positive data: explanatory learning (when a learner stabilizes in the limit on a correct grammar) and behaviourally correct learning (when a learner stabilizes in t…

Cited by 5 publications (6 citation statements)
References 19 publications
“…Furthermore, showing that U‐shapes are unnecessary for iterative learning of classes consisting of infinite languages turned out to be significantly easier than obtaining the result for arbitrary classes. In the context of Explanatory Learning, if a class does not contain an extension of every finite set, then that class can be learned by a decisive learner (Carlucci et al., )!…”
Section: On the Proof Techniques
confidence: 99%
“…In Carlucci et al. (), a number of variants of nonmonotonic learning criteria have been investigated in the context of Gold's model. In particular, the following restrictions on the learner's behavior have been studied: (1) no return to previously abandoned wrong hypotheses, (2) no return to overinclusive hypotheses, (3) no return to overgeneralizing hypotheses, and (4) no inverted U‐shapes.…”
Section: Other Forms of Non‐monotonic Learning
confidence: 99%
“…A learner is said to be U-shaped on L (see [3,7,8]) if, on some text T for L, for some n, m, k with n < m < k, M(T[n]) and M(T[k]) are grammars for L (in the numbering being used as hypothesis space), but M(T[m]) is not a grammar for L. A learner is said to be non-U-shaped on L if it is not U-shaped on L. A learner NUShI-identifies a class L if it I-identifies L and is non-U-shaped on each L ∈ L.…”
Section: Explanatory Learning with Additional Constraints
confidence: 99%
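The definition quoted above can be illustrated with a small sketch that scans a learner's conjecture sequence for a U-shape, i.e. indices n < m < k where the conjectures at n and k are correct for the target language but the one at m is not. This is a hypothetical illustration only: the set-valued hypotheses and the `is_correct` predicate are assumptions for the example, not part of the paper's formalism.

```python
def is_u_shaped(conjectures, is_correct):
    """Return True if the sequence of conjectures contains a U-shape:
    a correct conjecture, followed later by an incorrect one, followed
    later still by a correct one (the learner abandons and then returns
    to a correct hypothesis)."""
    seen_correct = False  # a correct conjecture has appeared (index n)
    dipped = False        # an incorrect conjecture appeared after it (index m)
    for h in conjectures:
        if is_correct(h):
            if dipped:    # correct again after the dip: index k found
                return True
            seen_correct = True
        elif seen_correct:
            dipped = True
    return False

# Hypothetical example: hypotheses as finite sets, target language {0, 1, 2}.
target = {0, 1, 2}
correct = lambda h: h == target
print(is_u_shaped([{0}, {0, 1, 2}, {0, 1}, {0, 1, 2}], correct))  # True: U-shape
print(is_u_shaped([{0}, {0, 1}, {0, 1, 2}], correct))             # False: monotone
```

A non-U-shaped learner, in these terms, is one for which `is_u_shaped` would return False on every text for every language in the learned class.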
“…." [3,7,8]. We denote the criteria of prudent, confident, consistent and non U-shaped learning with PrudentTxtEx, ConfTxtEx, ConsTxtEx and NUShTxtEx, respectively; accordingly for restricted variants.…”
Section: Introductionmentioning
confidence: 99%