2005
DOI: 10.1016/j.neunet.2005.06.016
Incremental learning of feature space and classifier for face recognition

Cited by 106 publications (73 citation statements) | References 13 publications
“…Other approaches to the "stability-plasticity dilemma" were proposed by Polikar et al (2001) and Ozawa et al (2005). Polikar et al (2001) proposed the "Learn++" approach that is based on the boosting (Schapire, 1990) technique.…”
Section: Life-long Learning Architectures (mentioning)
confidence: 99%
“…This makes the method unsuitable for our desired interactive learning capability. In contrast to this, Ozawa et al. (2005) proposed to store representative input-output pairs in a long-term memory to stabilize an incrementally learning radial basis function (RBF) network. It additionally accounts for a feature-selection mechanism based on incremental principal component analysis, but no class-specific feature selection is applied to efficiently separate co-occurring categories.…”
Section: Life-long Learning Architectures (mentioning)
“…In the incremental learning phase, the learning algorithm of AL-RAN is basically the same as that of RAN-LTM [7] except that RBF widths are automatically determined or adjusted in an online fashion. Let us explain how RBF widths are determined from incoming data.…”
Section: Incremental Learning Phase (mentioning)
confidence: 99%
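The automatic width determination mentioned in this citation can be illustrated with a common online heuristic: scaling a newly allocated unit's width by the distance to its nearest existing center so that neighboring basis functions overlap moderately. This is a generic sketch under assumed parameters, not the actual AL-RAN width rule.

```python
import numpy as np

def online_width(new_center, existing_centers, overlap=0.7, default=1.0):
    """Illustrative online RBF-width heuristic: set a new unit's width
    proportional to the distance to its nearest existing center.
    The overlap factor and default width are assumptions."""
    if len(existing_centers) == 0:
        return default                       # first unit: fall back to a default
    d_min = min(np.linalg.norm(new_center - c) for c in existing_centers)
    return overlap * d_min

centers = [np.array([0.0, 0.0]), np.array([2.0, 0.0])]
w = online_width(np.array([0.0, 1.0]), centers)  # nearest center is 1.0 away
```

Tying the width to the local spacing of centers lets the network adapt its receptive fields as data arrive, without a separate batch tuning pass.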
“…Although RBFN has mainly been used for batch learning, the practical significance of extending it to incremental learning is growing [7,8]. In particular, one-pass incremental learning [9] is an important concept for large-scale, high-dimensional data.…”
Section: Introduction (mentioning)
confidence: 99%