1996
DOI: 10.1017/s096354830000198x

Valid Generalisation from Approximate Interpolation

Abstract: Let H and C be sets of functions from a domain X to ℝ. We say that H validly generalises C from approximate interpolation if and only if for each η > 0 and ε, δ ∈ (0, 1) there is m₀(η, ε, δ) such that for any function t ∈ C and any probability distribution P on X, if m ≥ m₀ then with Pᵐ-probability at least 1 − δ, a sample x = (x₁, x₂, …, xₘ) ∈ Xᵐ satisfies: ∀h ∈ H, |h(xᵢ) − t(xᵢ)| ≤ η (1 ≤ i ≤ m) ⟹ […]. We find conditions that are necessary and sufficient for H to validly generalise C from approximate interpolation, and we obtain bounds …
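The implication in the abstract is cut off in this snippet. A plausible reconstruction of the full condition, following the standard form of the definition (the conclusion's exact thresholds below are an assumption, not quoted verbatim from the paper), is:

\[
\forall h \in H:\quad |h(x_i) - t(x_i)| \le \eta \ \ (1 \le i \le m) \;\Longrightarrow\; P\{\, x \in X : |h(x) - t(x)| \ge \eta + \epsilon \,\} \le \epsilon .
\]

That is: with high probability over the sample, every hypothesis that approximately interpolates the target on the sample (to tolerance η) must also be close to the target (to tolerance η + ε) on all but a small fraction of the domain.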

Cited by 8 publications (5 citation statements). References: 23 publications.
“…The sample complexity upper bound in Theorem 19 increases at least as 1/ε⁴. It seems plausible that this rate is excessive; perhaps it is an artifact of the use of Jensen's inequality in the proof.…”
Section: Agnostic Learning
Citation type: mentioning; confidence: 96%
“…Natarajan [20] considers the problem of learning a class of real-valued functions in the presence of bounded observation noise and presents sufficient conditions for learnability. (Theorem 2 in [4] shows that these conditions are not necessary in our setting.) Merhav and Feder [18], and Auer, Long, Maass, and Woeginger [6] study function learning in a worst-case setting.…”
Section: Introduction
Citation type: mentioning; confidence: 94%
“…Then an estimate of the effect f that is ϵ-close to f* with probability 1 − δ₁ indeed yields a solution to the original regression problem. The bound thus follows from Theorem 3 of Anthony et al. (1996).…”
Section: Theoretical Analysis
Citation type: mentioning; confidence: 91%
“…This is because unlike a binary classifier, which localizes errors on specific examples, a real-valued hypothesis can spread its error evenly over the entire sample, and it will not be affected by reweighting. The (η, γ)-weak learner, which has appeared, among other works, in Anthony et al [1996], Simon [1997], Avnimelech and Intrator [1999], Kégl [2003], gets around this difficulty, but provable general constructions of such learners have been lacking. Likewise, the heart of our sample compression engine, MedBoost, has been widely in use since Freund and Schapire [1997] in various guises.…”
Section: Related Work
Citation type: mentioning; confidence: 99%
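A note on the (η, γ)-weak learner mentioned in the excerpt above: in the usual formulation (stated here as a sketch, not quoted from any of the cited papers), such a learner returns, for any target t and any distribution P, a real-valued hypothesis h whose deviation from t exceeds η on at most a (1/2 − γ)-fraction of the domain:

\[
P\{\, x : |h(x) - t(x)| > \eta \,\} \le \tfrac{1}{2} - \gamma .
\]

Boosting procedures such as MedBoost then, roughly speaking, aggregate many such weak hypotheses (e.g. by a weighted pointwise median) to drive the residual error mass down.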