Abstract: We investigate the benefits of feature selection, nonlinear modelling and online learning when forecasting financial time series. We combine sequential updating with continual learning, specifically transfer learning. We perform feature representation transfer through clustering algorithms that determine the analytical structure of radial basis function networks we construct. These networks achieve lower mean-square prediction errors than kernel ridge regression models, which arbitrarily use all training vectors…
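As a toy illustration of why kernel ridge regression "uses all training vectors" (its dual solution carries one coefficient per training point, so model size grows with the training set), the following sketch uses scikit-learn; the data, kernel and hyperparameters are arbitrary assumptions, not values from the paper:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Kernel ridge regression keeps one dual coefficient per training vector,
# so the fitted model grows with the training set. The rbfnet discussed
# below compresses this to a fixed set of k-means prototypes instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # toy feature matrix (assumption)
y = np.tanh(X[:, 0]) + 0.1 * rng.normal(size=500)

krr = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.1).fit(X, y)
print(krr.dual_coef_.shape)              # (500,): one weight per training vector
```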
“…It is first with Moody and Darken [24] that we see the formulation of the rbfnet as the combination of an unsupervised learning model (k-means) and a supervised learning model (linear regression). This form of feature representation transfer boosts model learning capacity and, when combined with sequential optimisation, outperforms biased baselines such as the random-walk in multi-horizon returns forecasting [4].…”
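To make the Moody and Darken construction concrete, here is a minimal sketch of an rbfnet built as k-means feature representation transfer followed by a supervised linear readout. The Gaussian basis, the per-cluster width rule and the ridge penalty are assumptions of this sketch, not details taken from [24] or [4]:

```python
import numpy as np
from sklearn.cluster import KMeans

class RBFNet:
    """Moody-Darken style rbfnet: unsupervised k-means picks the hidden
    centres, then a supervised linear readout is fitted on the RBF features.
    Widths from intra-cluster RMS distance are a hypothetical choice."""

    def __init__(self, n_centres=10, ridge=1e-4):
        self.n_centres = n_centres
        self.ridge = ridge

    def _design(self, X):
        # Gaussian basis functions plus a bias column.
        d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(-1)
        phi = np.exp(-d2 / (2.0 * self.widths ** 2))
        return np.hstack([phi, np.ones((len(X), 1))])

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_centres, n_init=10).fit(X)
        self.centres = km.cluster_centers_
        # Width per centre: RMS distance of the cluster's members.
        self.widths = np.array([
            np.sqrt(((X[km.labels_ == k] - c) ** 2).sum(-1).mean())
            for k, c in enumerate(self.centres)
        ])
        self.widths = np.where(self.widths > 0, self.widths, 1.0)
        Phi = self._design(X)
        # Ridge-regularised least squares for the linear readout.
        A = Phi.T @ Phi + self.ridge * np.eye(Phi.shape[1])
        self.w = np.linalg.solve(A, Phi.T @ y)
        return self

    def predict(self, X):
        return self._design(X) @ self.w
```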
Section: The Radial Basis Function Network
Citation type: mentioning
confidence: 99%
“…We use their curds and whey (caw) procedure but combine it with exponentially weighted recursive least-squares to facilitate sequential optimisation in the test set without forward-looking bias. Finally, we draw on the research of Borrageiro et al. [4] to make use of online learning rbfnets, as these models retain greater knowledge of the input feature space. They also respond better to regime changes or concept drift than models that do not use feature representation transfer, whether from clustering algorithms [4], Gaussian mixture models [6] or echo state networks [5].…”
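The exponentially weighted recursive least-squares update that enables sequential optimisation can be sketched as follows. The forgetting factor and the inverse-covariance initialisation are assumed values; updating only after each forecast has been recorded is what avoids forward-looking bias:

```python
import numpy as np

class EWRLS:
    """Exponentially weighted recursive least-squares.
    A forgetting factor lam < 1 down-weights stale observations, so the
    readout tracks regime change. Predict first, record the forecast,
    then update on the realised target to avoid forward-looking bias."""

    def __init__(self, n_features, lam=0.99, delta=1e2):
        self.lam = lam
        self.P = delta * np.eye(n_features)   # inverse-covariance estimate
        self.w = np.zeros(n_features)

    def predict(self, x):
        return self.w @ x

    def update(self, x, y):
        # Standard RLS gain with exponential forgetting.
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)
        self.w += k * (y - self.w @ x)
        self.P = (self.P - np.outer(k, Px)) / self.lam
```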
Figure 1: The radial basis function network forecasts are fed into the curds and whey multivariate regression model, whose output is ranked and selected by the naive Bayes asset ranker.
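For illustration only, a generic naive Bayes asset ranker in the spirit of this caption might score each asset by the posterior probability that it lands in the day's top quantile given its forecast. The labelling rule and the single-feature Gaussian model here are hypothetical, not reproduced from the paper:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def rank_assets(forecast_history, realised_returns, current_forecasts, top_k=3):
    """Hypothetical naive Bayes asset ranker.
    forecast_history:  (T, n_assets) past model forecasts
    realised_returns:  (T, n_assets) matching realised returns
    current_forecasts: (n_assets,)   latest forecasts to rank
    Returns asset indices ordered by posterior P(top-quantile | forecast)."""
    # Label each (time, asset) pair: did the asset finish in the day's top_k?
    ranks = (-realised_returns).argsort(axis=1).argsort(axis=1)
    y = (ranks < top_k).ravel().astype(int)
    X = forecast_history.ravel().reshape(-1, 1)
    nb = GaussianNB().fit(X, y)
    scores = nb.predict_proba(current_forecasts.reshape(-1, 1))[:, 1]
    return np.argsort(-scores)
```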