2005
DOI: 10.1007/s10208-004-0160-z

Online Learning Algorithms

Abstract: In this paper, we study an online learning algorithm in Reproducing Kernel Hilbert Spaces (RKHSs) and general Hilbert spaces. We present a general form of the stochastic gradient method to minimize a quadratic potential function over an independent identically distributed (i.i.d.) sample sequence, and show a probabilistic upper bound on its convergence.
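For the least-squares setting the paper considers, a stochastic gradient update of this kind takes the form f_{t+1} = f_t − γ_t ((f_t(x_t) − y_t) K_{x_t} + λ f_t), where K_x is the kernel section at x. The following is a minimal sketch of such an update, assuming a Gaussian kernel, squared loss, and step sizes γ_t = γ₀/t; the names `online_kernel_sgd` and `gaussian_kernel` are illustrative, not from the paper.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    x, z = np.atleast_1d(x), np.atleast_1d(z)
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

def online_kernel_sgd(samples, lam=0.01, gamma0=1.0):
    """Stochastic gradient descent in an RKHS for regularized least squares.

    The hypothesis is stored as a kernel expansion f_t = sum_i c_i K(x_i, .)
    and updated on each i.i.d. sample (x_t, y_t) via
        f_{t+1} = f_t - gamma_t * ((f_t(x_t) - y_t) K_{x_t} + lam * f_t).
    """
    centers, coeffs = [], []
    for t, (x, y) in enumerate(samples, start=1):
        # Evaluate the current hypothesis at the new input.
        fx = sum(c * gaussian_kernel(xc, x) for xc, c in zip(centers, coeffs))
        g = gamma0 / t                                  # decaying step size
        coeffs = [(1 - g * lam) * c for c in coeffs]    # shrinkage from the lam*f_t term
        centers.append(x)                               # new kernel center
        coeffs.append(-g * (fx - y))                    # gradient of the squared loss
    return centers, coeffs
```

Note that each sample adds one kernel center, so the expansion grows linearly with t; this is inherent to the update, not an artifact of the sketch.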

Cited by 118 publications (87 citation statements), published 2006–2024. References 25 publications. Selected citation statements:
“…[15,18,23,25,29,37,39]). These developments, motivated by complexity theory in continuous optimization [26], concerned the convergence properties of SA-type methods during a finite number of iterations.…”
“…Expression (1) corresponds to the classical principle of minimum disturbance, and the constraint (2) sets to zero the a posteriori error. Though a large class of adaptive filtering techniques can be used here, we restrict ourselves to a gradient descent approach as studied in [10] and we consider the updating step…”
Section: Online Learning With Kernels
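The citing paper's expressions (1), (2) and its updating step are truncated in this snippet, so the sketch below is only an illustration of a minimum-disturbance-style update: change f as little as possible in RKHS norm while driving the a posteriori error at the new point to zero (exactly zero when eta = 1). All names are hypothetical; `gaussian_kernel` is as defined in the earlier sketch.

```python
def min_disturbance_step(centers, coeffs, x, y, eta=1.0, sigma=1.0):
    """Minimal-RKHS-norm correction along K(x, .).

    After the step, f_new(x) = f(x) + eta * (y - f(x)), so with eta = 1
    the a posteriori error at x is exactly zero.
    """
    fx = sum(c * gaussian_kernel(xc, x, sigma) for xc, c in zip(centers, coeffs))
    e = y - fx                                        # a priori error
    centers.append(x)
    coeffs.append(eta * e / gaussian_kernel(x, x, sigma))
    return centers, coeffs
```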
“…This reduces the model order from m to m−1. The Gram matrix K_{m−1} is obtained from K_m by considering expression (10), where the latter matrix is arranged in order that its last column and row have entries relative to x_n. Using the notation…”
Section: Incremental and Decremental Steps
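Expression (10) from the citing paper is not reproduced in the snippet, but the rearrange-and-truncate step it describes is straightforward. A sketch, with the function name and index convention assumed for illustration:

```python
import numpy as np

def decremental_step(K, j):
    """Reduce the model order from m to m-1.

    Permute the Gram matrix so the row/column of the discarded center
    x_n (index j) come last, then keep the leading (m-1) x (m-1)
    principal submatrix, which is K_{m-1}.
    """
    idx = [i for i in range(K.shape[0]) if i != j] + [j]
    K = K[np.ix_(idx, idx)]   # x_n now occupies the last row/column
    return K[:-1, :-1]
```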
“…Choices for the differential-privacy parameters and a utility analysis are presented for algorithms such as implicit gradient descent and generalized infinitesimal gradient descent. In this paper, the line of work begun with [8] is considered, which can be thought of as a kernel online learning algorithm.…”
Section: Introduction