We analyze an online learning algorithm that adaptively combines the outputs of two constituent algorithms (or experts) running in parallel to estimate an unknown desired signal. This online learning algorithm is shown to achieve, and in some cases outperform, the steady-state mean-square error (MSE) performance of the best constituent algorithm. However, the MSE analyses of this algorithm in the literature use approximations and rely on statistical models of the underlying signals. Hence, such analyses may not be useful or valid for signals generated by various real-life systems that exhibit high degrees of nonstationarity, limit cycles, and, in many cases, chaotic behavior. In this brief, we produce results in an individual-sequence manner. In particular, we relate the time-accumulated squared estimation error of this online algorithm, at any time and over any interval, to that of the optimal convex combination of the constituent algorithms tuned directly to the underlying signal, in a deterministic sense and without any statistical assumptions. Our analysis thus characterizes the transient, steady-state, and tracking behavior of this algorithm in a strong sense, without approximations in the derivations or statistical assumptions on the underlying signals, so that our results are guaranteed to hold. We illustrate the introduced results through examples.
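As an illustration of the combination scheme analyzed above, the following is a minimal Python sketch that convexly combines two expert output streams using a stochastic-gradient update on a sigmoid-parameterized mixture weight; the variable names, the sigmoid parameterization, and the step size mu are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def combine_two_experts(y1, y2, d, mu=0.1):
    """Convexly combine two expert output streams y1 and y2 to track d.

    The mixture weight lam = sigmoid(rho) is trained with a stochastic-
    gradient update on the instantaneous squared error. The sigmoid
    parameterization, names, and step size mu are illustrative.
    """
    rho = 0.0
    y_hat = np.empty(len(d))
    for t in range(len(d)):
        lam = 1.0 / (1.0 + np.exp(-rho))             # convex mixture weight in (0, 1)
        y_hat[t] = lam * y1[t] + (1.0 - lam) * y2[t]
        e = d[t] - y_hat[t]                          # estimation error
        # gradient step on rho through the sigmoid: d(e^2)/d(rho) ∝ -e (y1 - y2) lam (1 - lam)
        rho += mu * e * (y1[t] - y2[t]) * lam * (1.0 - lam)
    return y_hat
```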
We study how to invest optimally in a financial market having a finite number of assets from a signal processing perspective. Specifically, we investigate how an investor should distribute capital over these assets and when the distribution of funds should be reallocated in order to maximize the expected cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs on buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences are log-normally distributed, as in the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets with more than two stocks, and these extensions are provided in the paper. As predicted by our derivations, we significantly improve the achieved wealth relative to portfolio selection algorithms from the literature on historical data sets, under both mild and heavy transaction costs.
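A minimal simulation sketch of a two-asset threshold rebalanced portfolio under proportional transaction costs is given below; the target allocation b, the no-trade band eps, the cost rate, and the rebalancing rule are illustrative assumptions, not the optimal thresholds derived in the paper.

```python
import numpy as np

def threshold_rebalanced_wealth(x1, x2, b=0.5, eps=0.05, cost=0.01):
    """Simulate a two-asset threshold rebalanced portfolio.

    x1, x2 : sequences of relative prices (period-to-period price ratios).
    b      : target fraction of wealth held in asset 1.
    eps    : half-width of the no-trade band around b.
    cost   : proportional transaction cost rate.
    All parameter values and names are illustrative.
    """
    wealth = 1.0
    frac = b                                 # current fraction of wealth in asset 1
    for r1, r2 in zip(x1, x2):
        growth = frac * r1 + (1.0 - frac) * r2
        wealth *= growth
        frac = frac * r1 / growth            # price moves drift the allocation
        if abs(frac - b) > eps:              # rebalance only outside the band
            traded = abs(frac - b) * wealth  # amount of wealth shifted between assets
            wealth -= cost * traded          # pay proportional cost on the trade
            frac = b
    return wealth
```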
In this paper, we study the binary classification problem in machine learning and introduce a novel classification algorithm based on the 'Context Tree Weighting Method'. The introduced algorithm incrementally learns a classification model through sequential updates over a given data stream, i.e., each data point is processed only once and forgotten after the classifier is updated, and asymptotically achieves the performance of the best piecewise linear classifier defined by the 'context tree'. Since the computational complexity is only linear in the depth of the context tree, our algorithm is highly scalable and appropriate for real-time processing. We present experimental results on several benchmark data sets and demonstrate that our method provides significant computational savings in both the test (5∼35×) and training (40∼1000×) phases, while achieving classification accuracy comparable to an SVM with an RBF kernel.
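The sketch below illustrates the piecewise linear modeling idea on a stream: one online linear (perceptron-style) model per leaf of a depth-D sign-based partition of the feature space. The actual algorithm additionally mixes over all pruned subtrees of the context tree; that weighting, as well as the names and parameters used here, is an illustrative simplification.

```python
import numpy as np

class PiecewiseLinearStreamClassifier:
    """One online perceptron per leaf of a depth-D sign-based partition
    of the feature space (labels y in {-1, +1}). The context-tree
    weighting over pruned subtrees is omitted in this illustrative sketch.
    """
    def __init__(self, dim, depth=3, lr=0.1):
        assert depth <= dim
        self.depth, self.lr = depth, lr
        self.w = np.zeros((2 ** depth, dim + 1))     # one linear model per leaf

    def _leaf(self, x):
        bits = (x[:self.depth] > 0).astype(int)      # context: signs of the first D features
        return int("".join(map(str, bits)), 2)

    def predict(self, x):
        z = np.append(x, 1.0)                        # append bias term
        return 1 if self.w[self._leaf(x)] @ z > 0 else -1

    def update(self, x, y):
        z = np.append(x, 1.0)
        leaf = self._leaf(x)
        if y * (self.w[leaf] @ z) <= 0:              # perceptron update on a mistake
            self.w[leaf] += self.lr * y * z
```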
In this correspondence, we introduce a minimax regret criterion for least squares problems with bounded data uncertainties and solve it using semi-definite programming. We investigate a robust minimax least squares approach that minimizes the worst-case regret, where the regret is defined as the difference between the squared data error and the smallest attainable squared data error of a least squares estimator. Using a similar framework, we then propose a robust regularized least squares approach to the regularized least squares problem under data uncertainties. We show that both the unstructured and structured robust least squares problems and the robust regularized least squares problem can be cast in certain semi-definite programming forms. Through several simulations, we demonstrate the merits of the proposed algorithms with respect to well-known alternatives in the literature.
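As a hedged illustration of robust least squares with bounded data uncertainty, the sketch below solves the standard unstructured worst-case residual problem, whose solution minimizes ||Ax - b||_2 + rho ||x||_2 for a norm-bounded perturbation of A, as a convex program with cvxpy; it does not reproduce the paper's minimax regret SDP formulation, and the function name and parameters are illustrative.

```python
import cvxpy as cp

def robust_least_squares(A, b, rho):
    """Unstructured robust least squares with a norm-bounded perturbation
    ||dA|| <= rho on A: the worst-case residual min_x max_{||dA||<=rho}
    ||(A + dA) x - b||_2 reduces to ||A x - b||_2 + rho ||x||_2, solved
    here as a convex (second-order cone) program. Illustrative sketch,
    not the paper's minimax regret SDP formulation.
    """
    x = cp.Variable(A.shape[1])
    objective = cp.Minimize(cp.norm(A @ x - b, 2) + rho * cp.norm(x, 2))
    cp.Problem(objective).solve()
    return x.value
```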
We investigate adaptive mixture methods that linearly combine the outputs of m constituent filters running in parallel to model a desired signal. Using "Bregman divergences", we obtain certain multiplicative updates to train the linear combination weights under an affine constraint or without any constraints. We use the unnormalized relative entropy and the relative entropy to define two different Bregman divergences, which produce an unnormalized exponentiated gradient update and a normalized exponentiated gradient update on the mixture weights, respectively. We then carry out the mean and mean-square transient analyses of these adaptive algorithms when they are used to combine the outputs of m constituent filters. We illustrate the accuracy of our results and demonstrate the effectiveness of these updates for sparse mixture systems.
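A minimal sketch of the normalized exponentiated gradient update for combining m filter outputs, corresponding to the simplex-constrained mixture, is given below; the learning rate mu and the variable names are illustrative.

```python
import numpy as np

def eg_mixture(Y, d, mu=0.5):
    """Normalized exponentiated gradient (EG) update for combining the
    outputs of m constituent filters. Y is a (T, m) array of filter
    outputs and d the desired signal; the learning rate mu is illustrative.
    The weights remain positive and sum to one (simplex constraint).
    """
    T, m = Y.shape
    w = np.full(m, 1.0 / m)                  # uniform initial mixture weights
    y_hat = np.empty(T)
    for t in range(T):
        y_hat[t] = w @ Y[t]                  # combined estimate
        e = d[t] - y_hat[t]                  # estimation error
        w = w * np.exp(mu * e * Y[t])        # multiplicative (exponentiated) update
        w /= w.sum()                         # renormalize onto the simplex
    return y_hat
```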