2021
DOI: 10.48550/arxiv.2107.08444
Preprint
A Theory of PAC Learnability of Partial Concept Classes

Abstract: We extend the classical theory of PAC learning in a way that allows one to model a rich variety of practical learning tasks where the data satisfy special properties that ease the learning process. For example, tasks where the distance of the data from the decision boundary is bounded away from zero, or tasks where the data lie on a lower-dimensional surface. The basic and simple idea is to consider partial concepts: these are functions that can be undefined on certain parts of the space. When learning a partial …

Cited by 3 publications (6 citation statements)
References 37 publications (70 reference statements)
“…Such results have applications to the so-called probably-approximately-correct (PAC) models of machine learning; concerning PAC models, see e.g. [12, 6, 8, 1].…”
Section: Summary and Discussion
confidence: 98%
“…(Concerning the strictness of the inequality P(X ≥ E X) > 1/4 in (4), here one may recall that the inequality P(X > E X) ≥ 1/4 in (1) is strict unless n = 2 and p = 1/2, in which latter case condition (5) fails to hold.) (ii) Instead of the probability P(X ≥ E X) in (4), we have the (possibly) smaller probability P(X > E X) in (1). Improvement (i) and the optimality of the constant factor c are illustrated in Figure 1, showing the graphs {(p, P(X_{n,p} > np)) : 1/n ≤ p < 1} (solid)…”
Section: Summary and Discussion
confidence: 99%
“…More recently, the application of orthopairs in CLT has been studied in the setting of adversarial machine learning [22], as well as to characterize the generalization capacity of hypothesis classes under generative assumptions [23].¹ We note, however, that even though the above-mentioned work and the framework we study in this article rely on the representation formalism of orthopairs, the aims of these three frameworks are essentially orthogonal, also in terms of the mathematical techniques adopted: indeed, while the three-way learning framework we study relies on a generalization of the ERM paradigm, the frameworks studied in [23], [22] rely on a transductive learning approach.…”
(¹ Here realizability means that ∃h ∈ H s.t. L_D(h) = 0.)
Section: Theorem 1 (Let H Be a Hypothesis Class With VC Dimension)
confidence: 99%