2012 IEEE Conference on Evolving and Adaptive Intelligent Systems
DOI: 10.1109/eais.2012.6232796

A property of learning chunk data using incremental kernel principal component analysis

Abstract: An incremental learning algorithm of Kernel Principal Component Analysis (KPCA) called Chunk Incremental KPCA (CIKPCA) has been proposed for online feature extraction in pattern recognition. CIKPCA can reduce the number of times the eigenvalue problem has to be solved compared with the conventional incremental KPCA when a small number of data are simultaneously given as a stream of data chunks. However, our previous work suggests that the computational costs of the independent data selection in CIKPCA could dominate…
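
To make the chunk-wise idea concrete, the following is a minimal Python sketch, not the authors' CIKPCA: it solves one eigenvalue problem per incoming chunk over all retained data, whereas a per-sample incremental KPCA would solve one per sample. The class name, the RBF kernel choice, and the parameters (`gamma`, `n_components`) are illustrative assumptions, and the independent data selection that CIKPCA adds is omitted.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

class ChunkwiseKPCA:
    """Naive chunk-wise KPCA sketch: one eigen-decomposition per incoming
    chunk, taken over all data retained so far.  Unlike CIKPCA, this sketch
    stores every training point and performs no independent-data selection."""

    def __init__(self, n_components=5, gamma=1.0):
        self.n_components = n_components
        self.gamma = gamma
        self.X = None          # all retained training data
        self.alphas = None     # expansion coefficients of the principal axes

    def partial_fit(self, chunk):
        chunk = np.asarray(chunk, dtype=float)
        self.X = chunk if self.X is None else np.vstack([self.X, chunk])
        K = rbf_kernel(self.X, self.X, self.gamma)
        n = K.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
        Kc = J @ K @ J                        # centered kernel matrix
        w, V = np.linalg.eigh(Kc)             # eigenvalue problem: once per chunk
        idx = np.argsort(w)[::-1][: self.n_components]
        # Normalize so that each feature-space eigenvector has unit length.
        self.alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
        return self

    def transform(self, X_new):
        # Test-side kernel centering is omitted for brevity.
        K = rbf_kernel(np.asarray(X_new, dtype=float), self.X, self.gamma)
        return K @ self.alphas
```

With this interface, calling partial_fit once per arriving chunk keeps the number of eigen-decompositions equal to the number of chunks received, which is the property the chunk-wise formulation exploits; a per-sample update would instead solve the eigenvalue problem for every individual sample.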

Cited by 3 publications (4 citation statements)
References 12 publications
“…As mentioned before, Takaomi et al. [10] found that more time is needed for large chunk data. Therefore, the objective of this experiment is to measure the influence of the data division and the data selection when a chunk of data is received.…”
Section: Efficiency of IKPCA with Different Number of Chunk Sizes
confidence: 83%
“…However, large computation and memory costs are needed to obtain an accumulation ratio if the chunk size is large. Takaomi et al. [10] investigated the influence of chunk size on the learning time. They found that more time is required for large chunk data unless it is divided into small chunks.…”
Section: A Data Selection
confidence: 99%
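
The division of a large chunk into smaller sub-chunks before each incremental update can be sketched generically as below. The helper name, the max_chunk_size threshold, and the partial_fit interface are illustrative assumptions, not the procedure of [10]; whether smaller sub-chunks actually reduce learning time depends on the algorithm's per-chunk costs (in CIKPCA it is the independent data selection that grows with chunk size, which the naive sketch above does not reproduce).

```python
def learn_in_subchunks(model, chunk, max_chunk_size=50):
    """Feed a large chunk to an incremental learner as smaller sub-chunks.

    `model` is any object exposing a partial_fit(chunk) method, such as the
    ChunkwiseKPCA sketch above; `max_chunk_size` is an illustrative threshold.
    """
    for start in range(0, len(chunk), max_chunk_size):
        model.partial_fit(chunk[start:start + max_chunk_size])
    return model
```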
“…Up to now, a large number of incremental learning algorithms have been investigated extensively [3][4][5][6][7][8]. However, it is interesting to note that most incremental learning algorithms adopt the vector model, that is, they always convert high-dimensional samples into vectors.…”
Section: Introduction
confidence: 99%