2015
DOI: 10.1109/tnnls.2014.2346832
A Deterministic Analysis of an Online Convex Mixture of Experts Algorithm

Abstract: We analyze an online learning algorithm that adaptively combines outputs of two constituent algorithms (or the experts) running in parallel to estimate an unknown desired signal. This online learning algorithm is shown to achieve and in some cases outperform the mean-square error (MSE) performance of the best constituent algorithm in the steady state. However, the MSE analysis of this algorithm in the literature uses approximations and relies on statistical models on the underlying signals. Hence, such an anal…
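The combination scheme the abstract describes, adaptively mixing two experts' outputs with a convex weight, can be sketched as follows. This is a minimal illustrative reconstruction, not the paper's exact update: the weight is parametrized with a sigmoid so it stays in (0, 1) and is adapted by stochastic gradient descent on the squared estimation error, and the step size `mu` is an arbitrary choice.

```python
import math

def online_convex_mixture(stream, experts, mu=0.1):
    """Sketch of an online convex mixture of two experts.

    The combination weight lam = sigmoid(rho) stays in (0, 1); rho is
    updated by stochastic gradient descent on the squared error. This
    is an assumed reconstruction, not the paper's exact algorithm.
    """
    rho = 0.0
    predictions = []
    for x, d in stream:                      # x: input, d: desired signal
        y1, y2 = experts[0](x), experts[1](x)
        lam = 1.0 / (1.0 + math.exp(-rho))   # convex weight in (0, 1)
        y = lam * y1 + (1.0 - lam) * y2      # combined estimate
        e = d - y                            # estimation error
        # d(e^2)/d(rho) = -2*e*(y1 - y2)*lam*(1 - lam); step against it
        rho += mu * e * (y1 - y2) * lam * (1.0 - lam)
        predictions.append(y)
    return predictions
```

With one expert consistently better than the other, the weight drifts toward the better expert, so the combined estimate approaches the best constituent's output, which is the steady-state behavior the abstract attributes to the algorithm.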

Cited by 21 publications (17 citation statements)
References 22 publications
“…Over the past years, the global optimization problem has gathered significant attention with various algorithms being proposed in distinct fields of research. It has been studied especially in the fields of non-convex optimization [6]- [8], Bayesian optimization [9], convex optimization [10]- [12], bandit optimization [13], stochastic optimization [14], [15]; because of its practical applications in distribution estimation [16]- [19], multi-armed bandits [20]- [22], control theory [23], signal processing [24], game theory [25], prediction [26], [27], decision theory [28] and anomaly detection [29]- [31].…”
Section: A. Motivation (citation type: mentioning)
confidence: 99%
“…In the problems of learning, recognition, estimation or prediction [1]- [3]; decisions are often produced to minimize certain loss functions using features of the observations, which are generally noisy, random or even missing. There are numerous applications in a number of varying fields such as decision theory [4], control theory [5], game theory [6], [7], optimization [8], [9], density estimation and anomaly detection [10]- [15], scheduling [16], signal processing [17], [18], forecasting [19], [20] and bandits [21]- [23]. These decisions are acquired from specific learning models, where the goal is to distinguish certain data patterns and provide accurate estimations for practical use.…”
Section: A. Calibration (citation type: mentioning)
confidence: 99%
“…Then, the subtask models are learnt in parallel on the data from different working conditions by the same or diverse learning algorithms. The prediction of a query sample (only with input variables) by an OMM is the output from the subtask model that the query sample belongs to [29]- [31].…”
Section: The Online Mixture Model of Mach Number, A. The Mixture Learning for the Imbalanced Data (citation type: mentioning)
confidence: 99%
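The hard routing described in the last citation statement, where each query is answered by the subtask model of the working condition it belongs to, can be sketched as below. All names are hypothetical; the cited work trains its subtask models on Mach-number data, which is not reproduced here.

```python
class OnlineMixtureModel:
    """Sketch of a hard-gated mixture: one subtask model per working
    condition, and a query is routed to the model of its condition."""

    def __init__(self, condition_of, models):
        self.condition_of = condition_of   # maps a sample to a condition id
        self.models = models               # condition id -> fitted model

    def predict(self, x):
        # Route the query to the subtask model it belongs to
        return self.models[self.condition_of(x)](x)
```

Unlike the convex mixture analyzed in the paper above, which blends both experts' outputs, this gating uses exactly one subtask model per query.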