2019
DOI: 10.1088/0253-6102/71/2/243

Active Online Learning in the Binary Perceptron Problem

Abstract: The binary perceptron is the simplest artificial neural network, formed by N input units and one output unit, with the neural states and the synaptic weights all restricted to ±1 values. The task in the teacher-student scenario is to infer the hidden weight vector by training on a set of labeled patterns. Previous efforts on the passive learning mode have shown that learning from independent random patterns is quite inefficient. Here we consider the active online learning mode in which the student designs every…
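
For readers new to the setup, here is a minimal sketch of the teacher-student binary perceptron described in the abstract; the sizes, seed, and helper names are illustrative choices of this note, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 101                        # number of input units (illustrative; odd to avoid ties)

# Hidden teacher weight vector: every synaptic weight is +1 or -1.
teacher = rng.choice([-1, 1], size=N)

def label(w, x):
    """Binary perceptron output: the sign of the weighted input sum."""
    return 1 if w @ x >= 0 else -1

# Passive learning mode: independent random +/-1 patterns labeled by the teacher.
patterns = rng.choice([-1, 1], size=(200, N))
labels = np.array([label(teacher, x) for x in patterns])

# The student must infer `teacher` from (patterns, labels); in the active
# online mode it would instead design each new pattern itself before
# querying the teacher for its label.
```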

Cited by 5 publications (4 citation statements) · References 51 publications
“…The test error for another student perceptron learning from this training set then scales as a power law with exponent −1 for such data. Such perceptrons have also been analyzed in an active learning setting where the learner is free to design any new input to be labeled [27,28], rather than choosing from a fixed set of inputs, as in data pruning. Recent work [29] has analyzed this scenario but focused on message-passing algorithms that are tailored to the case of Gaussian inputs and perceptrons and are hard to generalize to real-world settings.…”
Section: Statistical Mechanics of Perceptron Learning
confidence: 99%
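
As a toy companion to the snippet above, the sketch below estimates test error versus training-set size in the passive setting; the clipped-Hebbian student is an illustrative learning rule chosen for this note, and the analytical exponent −1 quoted above refers to the specific pruned-data setting, so this crude rule need not reproduce it.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 201                                      # odd, so sign sums never tie
teacher = rng.choice([-1, 1], size=N)        # hidden teacher weights

def test_error(student, n_test=2000):
    """Fraction of fresh random patterns the student mislabels."""
    X = rng.choice([-1, 1], size=(n_test, N))
    return np.mean(np.sign(X @ student) != np.sign(X @ teacher))

for P in (N, 2 * N, 4 * N, 8 * N):
    X = rng.choice([-1, 1], size=(P, N))
    y = np.sign(X @ teacher)                 # teacher labels
    student = np.where(y @ X >= 0, 1, -1)    # clipped-Hebbian weights in {-1, +1}
    print(P, test_error(student))
```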
“…is the i-th entry of the ℓ-th composite sample vector [23,24,25,26,27]. The parameter θ_µ ∈ {−1, +1} is the sign of µ, which only affects the global sign of the inferred configuration σ_0.…”
Section: Perceptron Learning
confidence: 99%
“…We should be able to achieve almost perfect inference of σ_0 by setting X ≥ 5N [27]. The total number of sampled independent configurations is then of order O(N²).…”
Section: Perceptron Learning
confidence: 99%
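
A back-of-the-envelope reading of the O(N²) count, under the assumption (not stated in the snippet itself) that each of the X ≥ 5N composite samples is built from O(N) independently sampled configurations:

```python
N = 1000                         # input dimension (illustrative)
X = 5 * N                        # composite samples, per the X >= 5N criterion
configs_per_sample = N           # ASSUMED: O(N) configurations per composite sample
total_configs = X * configs_per_sample
print(total_configs)             # 5 * N**2, i.e. O(N^2) overall
```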
“…Active learning was previously studied in the context of the teacher-student perceptron problem. Best known is the line of work on Query by Committee [4,18,19], dealing with the membership-based active learning setting, i.e. where the samples are picked one by one into the training set and can be absolutely arbitrary N-dimensional vectors.…”
Section: Definition of the Problem and Related Work
confidence: 99%
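
To make the membership-query setting concrete, here is a hedged Query-by-Committee sketch: candidate inputs are proposed one at a time and a label is requested only when a two-member committee disagrees. Plain perceptron training from random initializations stands in for the version-space (Gibbs) sampling of the original algorithm, and all names and sizes are illustrative choices of this note.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 51                                       # input dimension (illustrative; odd to avoid ties)
teacher = rng.choice([-1.0, 1.0], size=N)    # hidden teacher vector

def train_perceptron(X, y, epochs=50):
    """Plain perceptron training from a random start; a crude stand-in
    for sampling a committee member from the version space."""
    w = rng.normal(size=N)
    for _ in range(epochs):
        for x, t in zip(X, y):
            if np.sign(w @ x) != t:
                w += t * x                   # classic perceptron update
    return w

X_train, y_train = [], []
queries = 0
while queries < 100:
    x = rng.choice([-1.0, 1.0], size=N)      # arbitrary candidate input
    if X_train:
        X_arr, y_arr = np.array(X_train), np.array(y_train)
        c1 = train_perceptron(X_arr, y_arr)  # two committee members
        c2 = train_perceptron(X_arr, y_arr)
        if np.sign(c1 @ x) == np.sign(c2 @ x):
            continue                         # committee agrees: discard, no query
    y_train.append(np.sign(teacher @ x))     # membership query to the teacher
    X_train.append(x)
    queries += 1
```

Querying only where the committee disagrees concentrates labels on maximally informative inputs, which is what lets this family of methods beat learning from independent random patterns.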