2021
DOI: 10.36227/techrxiv.14851077.v5
Preprint

Online Learning with Radial Basis Function Networks

Abstract: We investigate the benefits of feature selection, nonlinear modelling and online learning when forecasting financial time series. We combine sequential updating with continual learning, specifically transfer learning. We perform feature representation transfer through clustering algorithms that determine the analytical structure of radial basis function networks we construct. These networks achieve lower mean-square prediction errors than kernel ridge regression models, which arbitrarily use all training vecto…
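For context on the abstract's comparison: kernel ridge regression keeps every training vector as a basis centre, whereas the paper's RBF networks use only cluster prototypes. A minimal sketch of kernel ridge regression with an assumed Gaussian kernel (the `width` and `ridge` values are illustrative, not the paper's settings):

```python
import numpy as np

def gaussian_kernel(A, B, width=1.0):
    """Pairwise Gaussian kernel between row vectors of A and B."""
    d2 = ((A[:, None] - B[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_krr(X, y, ridge=1e-3, width=1.0):
    """Kernel ridge regression: every training vector acts as a centre."""
    K = gaussian_kernel(X, X, width)
    return np.linalg.solve(K + ridge * np.eye(len(X)), y)

def predict_krr(Xq, X, alpha, width=1.0):
    return gaussian_kernel(Xq, X, width) @ alpha

# toy usage: fit y = sin(x) on 100 points, then evaluate in-sample error
X = np.linspace(-3, 3, 100)[:, None]
y = np.sin(X).ravel()
alpha = fit_krr(X, y, ridge=1e-3, width=0.5)
pred = predict_krr(X, X, alpha, width=0.5)
```

Note that `alpha` has one coefficient per training vector; this is the "all training vectors" cost the abstract contrasts against.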


Cited by 1 publication (2 citation statements)
References 33 publications
“…It is first with Moody and Darken [24] that we see the formulation of the rbfnet as the combination of an unsupervised learning model (k-means) and a supervised learning model (linear regression). This form of feature representation transfer boosts model learning capacity and, when combined with sequential optimisation, outperforms biased baselines such as the random-walk in multi-horizon returns forecasting [4].…”
Section: The Radial Basis Function Network (mentioning)
confidence: 99%
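The two-stage construction this statement describes (k-means for the centres, a linear readout on the resulting basis functions) can be sketched as follows; the Gaussian width, cluster count and ridge penalty are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Unsupervised stage: find k prototype centres by Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - C[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return C

def rbf_features(X, C, width):
    """Gaussian basis functions centred on the k-means prototypes."""
    d2 = ((X[:, None] - C[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbfnet(X, y, k=12, width=0.5, ridge=1e-6):
    """Supervised stage: ridge-regularised linear readout on RBF features."""
    C = kmeans(X, k)
    Phi = rbf_features(X, C, width)
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(k), Phi.T @ y)
    return C, w

# toy usage: learn y = sin(x) with 12 centres instead of all 200 points
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X).ravel()
C, w = fit_rbfnet(X, y, k=12, width=0.5)
pred = rbf_features(X, C, 0.5) @ w
```

The readout weight vector `w` has only `k` entries, one per cluster prototype, which is the capacity saving relative to kernel methods that retain every training vector.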
“…We use their caw procedure but combine it with exponentially weighted recursive least-squares to facilitate sequential optimisation in the test set without forward-looking bias. Finally, we utilise the research of Borrageiro et al [4] to make use of online learning rbfnets, as these models retain more remarkable knowledge of the input feature space. They also respond better to regime changes or concept drifts than models that do not use feature representation transfer; for example, from clustering algorithms [4], Gaussian mixture models [6] or echo state networks [5].…”
Section: The Research Experiments (mentioning)
confidence: 99%