1991
DOI: 10.1109/72.80299
Query-based learning applied to partially trained multilayer perceptrons

Abstract: An approach is presented for query-based neural network learning. A layered perceptron partially trained for binary classification is considered. The single-output neuron is trained to be either a zero or a one. A test decision is made by thresholding the output at, for example, one-half. The set of inputs that produce an output of one-half forms the classification boundary. The authors adopted an inversion algorithm for the neural network that allows generation of this boundary. For each boundary point, the c…
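The boundary-generation idea in the abstract can be illustrated with a minimal numerical sketch: gradient-descend on the *input* of a fixed network until the output reaches one-half. The toy 2-4-1 sigmoid network and its hand-picked weights below are assumptions standing in for the paper's partially trained perceptron; this is not the authors' actual inversion algorithm.

```python
import numpy as np

# Toy 2-4-1 sigmoid perceptron with hand-picked weights (a stand-in
# for the paper's partially trained network, not its actual model).
W1 = np.array([[2.0, 0.0], [0.0, 2.0], [-2.0, 0.0], [0.0, -2.0]])
b1 = np.zeros(4)
W2 = np.array([1.0, 1.0, -1.0, -1.0])
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Single output in (0, 1); thresholded at 1/2 for classification."""
    return sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)

def invert_to_boundary(x0, target=0.5, lr=0.5, steps=2000, eps=1e-5):
    """Descend on the INPUT until the output approaches `target`.

    Inputs mapped to 1/2 lie on the classification boundary, so this
    realizes a simple numerical form of network 'inversion'.
    """
    x = x0.copy()
    for _ in range(steps):
        base = (forward(x) - target) ** 2
        grad = np.zeros_like(x)
        for i in range(len(x)):  # finite-difference gradient w.r.t. input
            xp = x.copy()
            xp[i] += eps
            grad[i] = ((forward(xp) - target) ** 2 - base) / eps
        x -= lr * grad
    return x

# Start from an input classified as "one" and slide it onto the boundary.
x_b = invert_to_boundary(np.array([1.0, 1.0]))
```

Repeating the descent from many starting inputs yields a set of boundary points, which is the role the inversion algorithm plays in the paper's query-selection scheme.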

Cited by 137 publications (52 citation statements)
References 10 publications
“…Uncertainty sampling is a term invented by Lewis and Gale (1994), though the ideas can be traced back to the query methods of Hwang et al (1991) and Baum (1991). We discuss the Lewis and Gale variant since it is widely implemented and general to probabilistic classifiers such as logistic regression.…”
Section: Uncertainty Sampling
confidence: 99%
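The Lewis and Gale variant mentioned above reduces to a simple rule: among the unlabeled pool, query the instance whose predicted class probability is closest to 1/2. A minimal sketch, assuming a hypothetical 1-D pool and an already-fitted logistic model (the parameters `w`, `b` and the pool are illustrative, not from either paper):

```python
import numpy as np

# Hypothetical unlabeled pool and an assumed, already-fitted logistic
# model p(y=1|x) = sigmoid(w*x + b); all values here are illustrative.
rng = np.random.default_rng(1)
pool = rng.uniform(-3.0, 3.0, size=100)
w, b = 2.0, -1.0

def p_pos(x):
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

# Uncertainty sampling: query the instance whose predicted probability
# is closest to 1/2, i.e. the one the classifier is least sure about.
scores = np.abs(p_pos(pool) - 0.5)
query = pool[np.argmin(scores)]
```

Because the rule only needs a probability estimate per instance, it applies unchanged to any probabilistic classifier, which is why the Lewis and Gale formulation generalizes beyond neural networks.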
“…For example, although noise was unpredictable and led to wildly varying target signals for the predictor, in the long run these signals did not change the adaptive predictor parameters much, and the predictor of predictor changes was able to learn this. A standard RL algorithm [114,33,109] was fed with curiosity reward signals proportional to the expected long-term predictor changes, and thus tried to maximize information gain [16,31,38,51,14] within the given limitations. In fact, we may say that the system tried to maximize an approximation of the (discounted) sum of the expected first derivatives of the data's subjective predictability, thus also maximizing an approximation of the (discounted) sum of the expected changes of the data's subjective compressibility.…”
Section: Reward For Compression Progress Through Predictor Improvement
confidence: 99%
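The "curiosity reward proportional to expected predictor changes" in the excerpt can be sketched numerically. The toy below is an assumption-laden illustration, not the cited system: a one-parameter predictor is trained online, and the per-step reward is the drop in its prediction error, i.e. a crude first derivative of predictability.

```python
import numpy as np

# Toy sketch (not the cited RL system): curiosity reward at each step
# is the improvement in the predictor's error on the new observation.
rng = np.random.default_rng(2)

w = 0.0            # 1-parameter predictor: y_hat = w * x
true_w = 3.0       # hidden regularity the predictor can learn
lr = 0.1
rewards = []

for _ in range(50):
    x = rng.uniform(-1.0, 1.0)
    y = true_w * x                  # predictable part of the data stream
    err_before = (y - w * x) ** 2
    w += lr * (y - w * x) * x       # one gradient step on the predictor
    err_after = (y - w * x) ** 2
    rewards.append(err_before - err_after)  # curiosity = progress made
```

On learnable structure the rewards are positive and fade as the regularity is absorbed; on pure noise the predictor makes no lasting progress, so little curiosity reward accrues — matching the observation in the excerpt that noisy targets did not change the predictor much in the long run.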
“…The probability of the same score can be derived by adding (12) and (10), since the score will remain the same if the mutation is either deleterious or neutral.…”
Section: Markov Search
confidence: 99%
“…An oracle [6], [11], [12], [13], [21] is a common source of information in assisted search. The computational overhead of the oracle often dominates the time required for a search.…”
Section: Introduction
confidence: 99%