Proceedings of the International Joint Conference on Neural Networks, 2003.
DOI: 10.1109/ijcnn.2003.1223991

SVM incremental learning, adaptation and optimization

Cited by 189 publications (115 citation statements)
References 2 publications
“…In contrast with these existing works, our algorithm implements incremental learning on a kd-tree based KNN. It incurs lower time consumption in the incremental phase than other algorithms such as [15] and [17]. In addition, by borrowing an idea from [6], the adaptation of a hybrid incremental learning scheme places our algorithm among the strongest of the incremental learning algorithms.…”
Section: Background and Related Work
confidence: 99%
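The excerpt above does not detail the kd-tree based incremental KNN itself. As a rough illustration of one common hybrid strategy (answer queries from a static kd-tree plus a small linear buffer of newly inserted points, rebuilding the tree once the buffer grows), the following Python sketch may help; the class name, the rebuild threshold, and the majority-vote prediction are all assumptions, not the cited algorithm.

import numpy as np
from scipy.spatial import cKDTree

class HybridIncrementalKNN:
    # Hypothetical sketch: kd-tree over the bulk data plus a linear buffer
    # for new points; the tree is rebuilt once the buffer exceeds a threshold.

    def __init__(self, X, y, rebuild_at=256):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        self.tree = cKDTree(self.X)
        self.buf_X, self.buf_y = [], []  # recent insertions, searched linearly
        self.rebuild_at = rebuild_at

    def insert(self, x, label):
        self.buf_X.append(np.asarray(x, float))
        self.buf_y.append(label)
        if len(self.buf_X) >= self.rebuild_at:  # amortized rebuild
            self.X = np.vstack([self.X, self.buf_X])
            self.y = np.concatenate([self.y, self.buf_y])
            self.tree = cKDTree(self.X)
            self.buf_X, self.buf_y = [], []

    def predict(self, x, k=5):
        d_tree, i_tree = self.tree.query(x, k=min(k, len(self.X)))
        cands = list(zip(np.atleast_1d(d_tree), self.y[np.atleast_1d(i_tree)]))
        for bx, by in zip(self.buf_X, self.buf_y):  # brute force over the buffer
            cands.append((np.linalg.norm(np.asarray(x) - bx), by))
        cands.sort(key=lambda t: t[0])
        labels = [lab for _, lab in cands[:k]]
        return max(set(labels), key=labels.count)  # majority vote over k nearest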
“…Therefore, incremental learning methods have been widely used for image classification. One support-vector-based incremental learning method is [15], which uses Karush-Kuhn-Tucker (KKT) conditions, adiabatic increments and a bookkeeping method. In [16], the authors implemented an incremental SVM based on a linear kernel.…”
Section: Background and Related Work
confidence: 99%
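The excerpt only names the ingredients of [15] (the Diehl & Cauwenberghs procedure). As a hedged sketch of the bookkeeping step that incremental SVMs of the Cauwenberghs-Poggio family perform between adiabatic updates, the fragment below partitions training points by their KKT status; the function names and tolerances are assumptions, and the actual update of the dual coefficients is omitted.

import numpy as np

def kkt_bookkeeping(alpha, g, C, tol=1e-6):
    # Partition points by KKT status, given dual coefficients alpha and
    # margins g_i = y_i * f(x_i) - 1 from the current SVM solution.
    alpha, g = np.asarray(alpha), np.asarray(g)
    S = np.where((alpha > tol) & (alpha < C - tol))[0]  # margin SVs: g_i ~ 0
    E = np.where(alpha >= C - tol)[0]                   # error SVs:  g_i <= 0
    R = np.where(alpha <= tol)[0]                       # reserve:    g_i >= 0
    return S, E, R

def violates_kkt(alpha_new, g_new, tol=1e-3):
    # A candidate point with alpha = 0 forces an incremental (adiabatic)
    # update only if its margin is violated, i.e. g_new < 0.
    return (alpha_new <= tol) and (g_new < -tol)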
“…Thus, we define a neighborhood set centered around C^{t*} as [C^{t*}/ε, εC^{t*}], where ε > 1. We update both the k-CV training set and validation set and, accordingly, the classifiers f^{t+1}_{1,…,k} by using a regularization parameter optimization technique (Diehl & Cauwenberghs, 2003) for every value of C ∈ [C^{t*}/ε, εC^{t*}]. We thus identify the best regularization parameter configuration C^{(t+1)*}.…”
Section: Incremental Model Selection for SVM
confidence: 99%
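As a minimal sketch of the neighborhood search described above, the fragment below evaluates C on a geometric grid spanning [C^{t*}/ε, εC^{t*}] with k-fold cross-validation. Note the simplification: the cited technique (Diehl & Cauwenberghs, 2003) updates the k classifiers f_{1,…,k} incrementally across parameter values, whereas this sketch retrains each SVM from scratch; the function name and grid size are assumptions.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def refine_C(X, y, C_star, eps=2.0, n_grid=5, k=5):
    # Geometric grid over the neighborhood [C*/eps, eps*C*], eps > 1.
    grid = np.geomspace(C_star / eps, eps * C_star, n_grid)
    # Score each candidate with k-fold CV (retraining from scratch here,
    # unlike the incremental updates of the cited technique).
    scores = [cross_val_score(SVC(kernel="rbf", C=C), X, y, cv=k).mean()
              for C in grid]
    return grid[int(np.argmax(scores))]  # best configuration C^{(t+1)*}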
“…For this it is necessary either to design a mechanism of online learning or to perform an adaptation of the learnt SVM classifier to new environmental acoustics, similar to MAP or MLLR adaptation for GMMs. Some work on the former approach can be found in the literature [DC03], while the latter is still an open question.…”
Section: Acoustic Event Detection in Real Environments
confidence: 99%
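The "mechanism of online learning" mentioned above is left open in the excerpt. One generic stand-in (not the scheme of [DC03]) is a linear SVM trained by stochastic gradient descent on the hinge loss, updated batch by batch as audio from the new environment arrives; stream_of_batches below is a hypothetical data source.

import numpy as np
from sklearn.linear_model import SGDClassifier

# Linear SVM (hinge loss) updated online, one mini-batch at a time.
clf = SGDClassifier(loss="hinge", alpha=1e-4)
classes = np.array([0, 1])  # e.g. acoustic event vs. background (assumed labels)

for X_batch, y_batch in stream_of_batches():  # hypothetical source of feature batches
    clf.partial_fit(X_batch, y_batch, classes=classes)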