2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06) 2006
DOI: 10.1109/focs.2006.51

New Results for Learning Noisy Parities and Halfspaces

Abstract: We address well-studied problems concerning the learnability of parities and halfspaces in the presence of classification noise. Learning parities under the uniform distribution with random classification noise, also known as the noisy parity problem, is a famous open problem in computational learning. We reduce a number of basic problems regarding learning under the uniform distribution to learning of noisy parities. We show that under the uniform distribution, learning parities with adversarial classification…
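The noisy parity problem described in the abstract can be made concrete with a short sketch. The snippet below (illustrative only; all names are hypothetical, not from the paper) draws examples of a hidden parity function under the uniform distribution and flips each label independently with the noise rate η:

```python
# Sketch of the noisy parity example oracle: x is uniform over {0,1}^n,
# the label is the parity (XOR) of the bits of x indexed by `secret`,
# and each label is flipped independently with probability eta.
import random

def noisy_parity_examples(n, secret, eta, m, rng=random):
    """Draw m examples (x, y) for the noisy parity problem."""
    examples = []
    for _ in range(m):
        x = [rng.randrange(2) for _ in range(n)]
        label = sum(x[i] for i in secret) % 2  # parity on coordinates in secret
        if rng.random() < eta:
            label ^= 1  # random classification noise
        examples.append((x, label))
    return examples

sample = noisy_parity_examples(n=8, secret=[0, 3, 5], eta=0.1, m=5)
```

With η = 0 the labels are exact linear equations over GF(2) and Gaussian elimination recovers the secret set; with η > 0, no polynomial-time algorithm is known, which is what makes the problem a central open question.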

Cited by 98 publications (96 citation statements) | References 40 publications
“…Thus we may consider this equivalence as new evidence of the hardness of attribute-efficient learning of parities from random examples only. This result, together with a recent result of Feldman et al. [FGKP06], implies the equivalence of attribute-efficient learning of parities and learning of parities with random noise (for an appropriate transformation of parameters).…”
Section: Our Results (supporting)
confidence: 65%
“…FGKP06]. This allows us to relate the two learning problems directly, in particular, we obtain the following theorem.…”
Section: Learning Of Parities and Binary Linear Codes (mentioning)
confidence: 90%
“…Klivans and Sherstov (2006) have recently given the first representation-independent (cryptographic) hardness results for PAC learning intersections of halfspaces. Feldman et al. (2006) have obtained closely related results. The only other relevant hardness results are for representation-dependent (proper) learning: if the learner's output hypothesis must be from a restricted class of functions (e.g., intersections of halfspaces), then the learning problem in question is NP-hard with respect to randomized reductions (Alekhnovich et al. 2004).…”
(mentioning)
confidence: 60%
“…The goal of the learner is to construct a high-accuracy hypothesis function h, i.e., one which satisfies Pr[f(x) ≠ h(x)] ≤ ε, where the probability is with respect to the uniform distribution and ε is an error parameter given to the learning algorithm. Algorithms and hardness results in this framework have interesting connections with topics such as discrete Fourier analysis [Man94], circuit complexity [LMN93], noise sensitivity and influence of variables in Boolean functions [KKL88, BKS99, KOS04, OS07], coding theory [FGKP06], privacy [BLR08, KLN+08], and cryptography [BFKL93, Kha95]. For these reasons, and because the model is natural and elegant in its own right, the uniform-distribution learning model has been intensively studied for almost two decades.…”
Section: Background and Motivation (mentioning)
confidence: 99%
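The accuracy criterion quoted above, Pr[f(x) ≠ h(x)] ≤ ε under the uniform distribution, can be checked exactly for small n by enumerating all 2^n inputs. A minimal sketch (function and example names are illustrative, not from any cited work):

```python
# Exact disagreement Pr[f(x) != h(x)] for x uniform over {0,1}^n,
# computed by brute-force enumeration of all 2^n points.
from itertools import product

def disagreement(f, h, n):
    """Fraction of inputs in {0,1}^n where f and h disagree."""
    points = list(product([0, 1], repeat=n))
    return sum(f(x) != h(x) for x in points) / len(points)

# Hypothetical example: h agrees with f everywhere except the all-ones input.
f = lambda x: x[0] ^ x[1]
h = lambda x: (x[0] ^ x[1]) ^ (1 if all(x) else 0)
err = disagreement(f, h, 3)  # differs only on (1,1,1), so err = 1/8
```

In the learning setting, of course, the algorithm only sees samples and must estimate this error empirically; the exhaustive computation here just makes the uniform-distribution error measure concrete.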