2015
DOI: 10.1109/tsp.2015.2457396

Analyzing Sparse Dictionaries for Online Learning With Kernels

Abstract: Many signal processing and machine learning methods share essentially the same linear-in-the-parameter model, with as many parameters as available samples, as in kernel-based machines. Sparse approximation is essential in many disciplines, with new challenges emerging in online learning with kernels. To this end, several sparsity measures have been proposed in the literature to quantify sparse dictionaries and to construct relevant ones, the most prolific ones being the distance, the approximation, the coherence…

Cited by 35 publications (20 citation statements) · References 48 publications
“…To solve problem (7), the Alternating Direction Method of Multipliers (ADMM) [13] is used. In ADMM form, problem (7) can be reformulated as (8), and the resulting iterations are given by (9), (10) and (11). In (9), I is an identity matrix, and (1+c)KᵀK + ρI is always invertible since ρ > 0. S in (10) is the soft-thresholding operator, defined in (12)…”
Section: Incremental Learning Methods For L1 Regularized Kernel Machine
confidence: 99%
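The cited equations (7)–(12) are not reproduced in this excerpt, so the sketch below is only an illustrative ADMM loop with the soft-thresholding step it mentions, applied to a generic L1-regularized kernel least-squares objective. The objective, the weights c and ρ, and the helper soft_threshold are assumptions for the sketch, not the formulation of the cited paper; only the system matrix (1+c)KᵀK + ρI mirrors the quoted statement.

```python
import numpy as np

def soft_threshold(x, tau):
    """Element-wise soft-thresholding operator S_tau(x)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def admm_l1_kernel(K, y, lam, c=1.0, rho=1.0, n_iter=200):
    """Illustrative ADMM for: min_a 0.5*||y - K a||^2 + 0.5*c*||K a||^2 + lam*||a||_1,
    with the splitting a = z. The quadratic update involves (1+c) K^T K + rho I,
    which is always invertible since rho > 0."""
    n = K.shape[1]
    a = np.zeros(n)          # primal variable
    z = np.zeros(n)          # auxiliary (sparse) copy of a
    u = np.zeros(n)          # scaled dual variable
    A_inv = np.linalg.inv((1.0 + c) * K.T @ K + rho * np.eye(n))
    Kty = K.T @ y
    for _ in range(n_iter):
        a = A_inv @ (Kty + rho * (z - u))        # ridge-like quadratic update
        z = soft_threshold(a + u, lam / rho)     # soft-thresholding update
        u = u + a - z                            # dual ascent
    return z
```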
“…Most similar to the research presented here is work on SVMs [6][7][8][9][10][11]. However, the sparsity of the SVM is determined by the hinge loss function, and the number of support vectors grows steadily as the incremental learning method is executed sequentially on the training samples; this rapidly increases the computational cost and the memory required for subsequent training samples.…”
Section: Introduction
confidence: 93%
“…To measure the quality of the dictionaries, we consider the coherence (correlation measured with the inner product) between the atoms of each dictionary, thus measuring how similar two atoms in the dictionary are. This fundamental information allows one to define more powerful measures, such as the coherence and the Babel function [41], [42]. The coherence of a given dictionary, defined as the maximum absolute inner product between two distinct atoms, provides strong insight into the capacity of the dictionary to recover sparse signals.…”
Section: Large-scale (Global) Dictionary Learning
confidence: 99%
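For concreteness, a minimal sketch of how the two measures named in this excerpt can be computed from a dictionary with unit-norm atoms as columns; the random dictionary at the end is a made-up example, not data from the cited works.

```python
import numpy as np

def coherence(D):
    """Coherence: maximum absolute inner product between two distinct unit-norm atoms."""
    D = D / np.linalg.norm(D, axis=0, keepdims=True)   # normalise atoms (columns)
    G = np.abs(D.T @ D)                                # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                           # ignore self-correlations
    return G.max()

def babel(D, m):
    """Babel function mu_1(m): for each atom, sum its m largest absolute
    correlations with the other atoms, then take the maximum over atoms."""
    D = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return np.sort(G, axis=1)[:, -m:].sum(axis=1).max()

# Example: a random dictionary of 20 atoms in R^10
rng = np.random.default_rng(0)
D = rng.standard_normal((10, 20))
print(coherence(D), babel(D, 3))
```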
“…where the set D_i = {s_j}_{j=1}^{N_i} is known as the dictionary at time i and the vectors s_j as centres. Sparsification criteria for KAFs ensure a finite-sized dictionary when the signal lies on a compact set [17]; however, when the signal is monotonically increasing these criteria keep adding centres to the dictionary, resulting in higher computational complexity due to dictionary members that do not necessarily improve predictions.…”
Section: B. Sparsification Criteria
confidence: 99%
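As a hedged illustration of such a sparsification rule, the coherence criterion admits a new sample as a centre only if its kernel correlation with every existing centre stays below a threshold mu0. The Gaussian kernel, the threshold value and the toy data stream below are arbitrary choices for the sketch, not parameters taken from the cited paper.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Unit-norm Gaussian kernel, so k(x, s_j) plays the role of the coherence
    between the candidate sample and an existing centre."""
    return np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)) ** 2 / (2.0 * sigma ** 2))

def update_dictionary(dictionary, x_new, mu0=0.5, sigma=1.0):
    """Coherence-based sparsification: keep x_new as a new centre only if its
    coherence with every existing centre is at most mu0."""
    coh = max((gaussian_kernel(x_new, s, sigma) for s in dictionary), default=0.0)
    if coh <= mu0:
        dictionary.append(x_new)    # informative sample: grow the dictionary
    return dictionary

# Example: stream of samples on a compact set; the dictionary stays small and finite
rng = np.random.default_rng(1)
dictionary = []
for x in rng.uniform(-1.0, 1.0, size=(200, 2)):
    update_dictionary(dictionary, x)
print(len(dictionary), "centres retained out of 200 samples")
```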