Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. No.98TH8468)
DOI: 10.1109/nnsp.1999.788120
Sequential support vector machines

Cited by 30 publications (14 citation statements)
References 11 publications
“…Minimizing ‖w‖² will make the function as flat as possible, thus playing the role of controlling the function capacity. The second term is the empirical error measured by the ε-insensitive loss function (3). This loss function provides the advantage of using sparse data points to represent the designed function (1).…”
Section: SVM for Regression Estimation (mentioning)
confidence: 99%
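
For context, the formulation this passage refers to is the standard ε-SVR problem, in which a flatness term is traded off against the ε-insensitive empirical error. A minimal sketch in generic notation; the equation numbers (1) and (3) belong to the citing paper and are not reproduced here:

% Generic epsilon-SVR primal: flatness term plus slack variables (empirical error).
\begin{align*}
  f(x) &= \langle w, \phi(x)\rangle + b \\
  \min_{w,\,b,\,\xi,\,\xi^*}\;& \tfrac{1}{2}\lVert w\rVert^2
      + C \sum_{i=1}^{\ell} (\xi_i + \xi_i^*) \\
  \text{s.t.}\;& y_i - \langle w, \phi(x_i)\rangle - b \le \varepsilon + \xi_i,\quad
      \langle w, \phi(x_i)\rangle + b - y_i \le \varepsilon + \xi_i^*,\quad
      \xi_i,\,\xi_i^* \ge 0 .
\end{align*}

Minimizing ‖w‖² keeps f flat (capacity control), while only points whose residual exceeds ε contribute slack, which is what yields the sparse representation mentioned above.
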
“…Therefore, the learning algorithm used should take this characteristic into account. Usually, the information provided by recent data points is given more weight than that provided by distant data points [3], [4], since in nonstationary financial time series the recent data points can carry more important information than the distant ones.…”
mentioning
confidence: 99%
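
One way to realize this weighting in practice is to attach exponentially decaying per-sample weights to a support vector regressor. A minimal sketch, assuming scikit-learn's SVR, which accepts per-sample weights in fit; the toy series, window length, and decay factor are illustrative choices, not those of [3] or [4]:

import numpy as np
from sklearn.svm import SVR

# Toy nonstationary series: a drifting sinusoid with noise.
rng = np.random.default_rng(0)
t = np.arange(200)
y = np.sin(0.05 * t) + 0.002 * t + 0.1 * rng.standard_normal(t.size)

# Sliding-window inputs: predict y[i] from the previous 5 values.
window = 5
X = np.array([y[i - window:i] for i in range(window, t.size)])
target = y[window:]

# Exponentially increasing weights: recent samples count more than distant ones.
decay = 0.99
weights = decay ** np.arange(len(target) - 1, -1, -1)

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(X, target, sample_weight=weights)
print(model.predict(X[-1:]))  # prediction for the most recent window
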
“…The best results reported in [11] using SVR and KKF are shown for the purpose of comparison. The results obtained for the SSVM [3] using the noise estimation scheme explained in the previous section are also shown. Different window lengths and kernel functions were tried for the SSVM and the KARX model, and the best results are reported.…”
Section: Mackey-Glass Time Series Prediction (mentioning)
confidence: 92%
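
For reference, the Mackey-Glass benchmark used in these comparisons is generated from a delay differential equation, dx/dt = a*x(t-tau)/(1 + x(t-tau)^10) - b*x(t), usually with tau = 17, a = 0.2, b = 0.1. A minimal sketch using a simple Euler step and a windowed regression dataset; this is an illustration, not necessarily the exact data-generation setup of [3] or [11]:

import numpy as np

def mackey_glass(n_samples=1000, tau=17, a=0.2, b=0.1, dt=1.0, x0=1.2):
    # Euler integration of dx/dt = a*x(t-tau)/(1 + x(t-tau)**10) - b*x(t).
    history = int(tau / dt)
    x = np.full(n_samples + history, x0)
    for i in range(history, n_samples + history - 1):
        x_tau = x[i - history]
        x[i + 1] = x[i] + dt * (a * x_tau / (1.0 + x_tau ** 10) - b * x[i])
    return x[history:]

series = mackey_glass()

# Windowed dataset: predict the next value from the previous 4 samples.
window = 4
X = np.array([series[i - window:i] for i in range(window, len(series))])
y = series[window:]
print(X.shape, y.shape)  # (996, 4) (996,)
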
“…A. K. Suykens [4] also falls into this category of static function mapping. The sequential support vector machine proposed in [3] uses the Kalman recursions for updating the parameters, which were also used by F. N. Chowdhury [12]. In this paper, we estimate the parameters using a similar approach.…”
Section: Related Approaches (mentioning)
confidence: 99%
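
Below is a generic sketch of a Kalman-style recursive parameter update for a model that is linear in its features. It illustrates the recursion pattern only; it is not the exact SSVM update of [3], whose loss and gain terms differ:

import numpy as np

def kalman_update(theta, P, phi, y, r=1e-2, q=1e-4):
    # One recursive update of parameters theta (covariance P) from a new
    # observation y with feature vector phi, under a random-walk parameter
    # model with process noise q and measurement noise r.
    P = P + q * np.eye(P.shape[0])           # predict: parameters may drift
    k = P @ phi / (phi @ P @ phi + r)        # Kalman gain
    theta = theta + k * (y - phi @ theta)    # correct with the innovation
    P = P - np.outer(k, phi) @ P             # shrink covariance
    return theta, P

# Usage on a toy streaming regression problem.
rng = np.random.default_rng(1)
d = 3
theta, P = np.zeros(d), np.eye(d)
true_w = np.array([0.5, -1.0, 2.0])
for _ in range(500):
    phi = rng.standard_normal(d)
    y = phi @ true_w + 0.1 * rng.standard_normal()
    theta, P = kalman_update(theta, P, phi, y)
print(theta)  # should approach true_w
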
“…Therefore, it is advantageous to give more weight to the information provided by recent data than to that provided by distant data (Freitas, Milo, & Clarkson, 1999). Motivated by this fact, an innovative SVM approach called C-ascending SVM was proposed by Tay and Cao (2002), in which the ε-insensitive error is multiplied by an exponential weight function so that recent and distant data in the time sequence receive different weights.…”
Section: The Proposed Methods (mentioning)
confidence: 99%
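
One generic way to write such a weighted objective is sketched below; the specific weight function of Tay and Cao (2002) is not reproduced here, and q_i is illustrative notation for nondecreasing sample weights:

% Weighted epsilon-insensitive empirical risk: recent samples (larger i) get larger q_i.
\min_{w,\,b}\;\tfrac{1}{2}\lVert w\rVert^2
  + C \sum_{i=1}^{\ell} q_i \,\bigl| y_i - f(x_i) \bigr|_{\varepsilon},
\qquad |u|_{\varepsilon} = \max(0,\, |u| - \varepsilon),
\qquad 0 \le q_1 \le q_2 \le \dots \le q_\ell .
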