2006
DOI: 10.1007/11776420_38

Online Variance Minimization

Abstract: We consider the following type of online variance minimization problem: in every trial t our algorithms get a covariance matrix $C_t$ and try to select a parameter vector $w_{t-1}$ such that the total variance over a sequence of trials, $\sum_{t=1}^{T} w_{t-1}^\top C_t\, w_{t-1}$, is not much larger than the total variance of the best parameter vector $u$ chosen in hindsight. Two parameter spaces in $\mathbb{R}^n$ are considered: the probability simplex and the unit sphere. The first space is associated with the problem of minimizing risk in stock portfolios…
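The abstract describes an online algorithm over the probability simplex for the quadratic loss $w^\top C_t w$, whose gradient at $w$ is $2 C_t w$. Below is a minimal sketch of an exponentiated-gradient-style update for this setting; the learning rate, the random test data, and the function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def eg_variance_step(w, C, eta=0.1):
    """One exponentiated-gradient-style step on the probability simplex.

    The per-trial loss is w^T C w; its gradient at w is 2 C w. The
    weights are updated multiplicatively and renormalized so they
    remain a probability vector.
    """
    grad = 2.0 * C @ w
    w = w * np.exp(-eta * grad)
    return w / w.sum()

# Illustrative usage on random positive semidefinite matrices.
rng = np.random.default_rng(0)
n, T = 4, 50
w = np.full(n, 1.0 / n)            # start at the uniform distribution
total_variance = 0.0
for _ in range(T):
    A = rng.standard_normal((n, n))
    C = A @ A.T / n                # random PSD "covariance" for the trial
    total_variance += w @ C @ w    # variance incurred before updating
    w = eg_variance_step(w, C)
print(w, total_variance)
```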

Cited by 43 publications (51 citation statements)
References 30 publications
“…Such linear losses are special because they are the least convex losses. The main case where such linear losses have been investigated is in connection with the Hedge and the Matrix Hedge algorithm (Freund and Schapire 1997;Warmuth and Kuzmin 2011). However for the latter algorithms the parameter space is one-norm or trace norm bounded, respectively.…”
Section: Technical Contributions (mentioning; confidence: 99%)
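The excerpt uses the Hedge algorithm of Freund and Schapire as the reference point for linear losses over a one-norm-bounded parameter space. As a reminder of that baseline, here is a minimal sketch of the Hedge update; the learning rate, the loss data, and the function name are illustrative.

```python
import numpy as np

def hedge_step(w, loss, eta=0.5):
    """One Hedge update: each expert's weight decays exponentially in
    its own loss, and the learner's linear loss is w . loss."""
    w = w * np.exp(-eta * np.asarray(loss))
    return w / w.sum()

# Illustrative usage: three experts, a short sequence of loss vectors.
w = np.full(3, 1.0 / 3)
for loss in ([0.2, 0.9, 0.5], [0.1, 0.8, 0.4], [0.3, 0.7, 0.6]):
    w = hedge_step(w, loss)
print(w)  # mass concentrates on the low-loss first expert
```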
“…As stated in the text, the proof of QIP = PSPACE presented in this chapter makes use of simplifications due to Wu [171]. The specific formulation of the matrix multiplicative weights update method upon which the proof that QIP = PSPACE relies was discovered independently by Warmuth and Kuzmin [164] and Arora and Kale [16]. Readers interested in learning more about the matrix multiplicative weights update method, as well as a bit of its history, are referred to Kale's PhD thesis [104].…”
Section: Chapter Notes (mentioning; confidence: 99%)
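As a rough illustration of the matrix multiplicative weights update method mentioned here: the density matrix at each step is proportional to the matrix exponential of the negated, scaled running sum of the symmetric loss matrices seen so far. The sketch below assumes symmetric inputs and an illustrative learning rate.

```python
import numpy as np

def sym_expm(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.exp(vals)) @ vecs.T

def matrix_mwu(loss_matrices, eta=0.1):
    """Matrix multiplicative weights: the density matrix is proportional
    to exp(-eta * sum of the symmetric loss matrices observed so far)."""
    n = loss_matrices[0].shape[0]
    running = np.zeros((n, n))
    for M in loss_matrices:
        running += M
    W = sym_expm(-eta * running)
    return W / np.trace(W)         # normalize to trace one
```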
“…The Online PCA Algorithm in (Warmuth & Kuzmin, 2006b) uses this idea and has good worst-case loss bounds. It uses the density matrix update based on matrix logs and exponentials like the one used in (Tsuda et al, 2005;Warmuth & Kuzmin, 2006a;Arora & Kale, 2007) but with the additional constraint that the eigenvalues are upper bounded (capped). Density matrices are symmetric, positive definite matrices of trace one.…”
Section: Online Kernel PCA Algorithm (mentioning; confidence: 99%)
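The "capping" constraint mentioned in the excerpt upper-bounds the eigenvalues of the trace-one density matrix. Here is a minimal sketch of one way to project the eigenvalue (probability) vector under a cap c, assuming the cap is feasible (c * len(p) >= 1); the routine name and test values are illustrative.

```python
import numpy as np

def cap_probabilities(p, c):
    """Project a probability vector (the eigenvalues of a trace-one
    density matrix) so no entry exceeds the cap c. Assumes feasibility,
    i.e. c * len(p) >= 1."""
    p = np.asarray(p, dtype=float).copy()
    capped = np.zeros(len(p), dtype=bool)
    while True:
        over = (p > c) & ~capped
        if not over.any():
            return p
        capped |= over
        p[capped] = c
        free = ~capped
        # Rescale the uncapped mass so the vector still sums to one;
        # rescaling can push new entries over the cap, hence the loop.
        p[free] *= (1.0 - capped.sum() * c) / p[free].sum()

print(cap_probabilities([0.7, 0.2, 0.1], c=0.5))  # -> [0.5, 0.3333..., 0.1666...]
```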
“…More recently the entropic family of updates has been generalized to the case when the instances are symmetric matrices X and the parameter is a density matrix (Tsuda et al, 2005;Warmuth & Kuzmin, 2006a;Warmuth & Kuzmin, 2006b;Arora & Kale, 2007). The regularization is now the quantum relative entropy for density matrices instead of the regular relative entropy for probability vectors, and the matrix logarithm of the density matrix parameter is essentially a linear combination of the instance matrices.…”
Section: Introduction (mentioning; confidence: 99%)
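The excerpt notes that under quantum-relative-entropy regularization, the matrix logarithm of the density matrix parameter is essentially a linear combination of the instance matrices. A minimal sketch of a single update step consistent with that description, computed through the eigendecomposition of symmetric matrices; the step size and function names are illustrative assumptions.

```python
import numpy as np

def sym_funcm(S, f):
    """Apply a scalar function to a symmetric matrix through its spectrum."""
    vals, vecs = np.linalg.eigh(S)
    return (vecs * f(vals)) @ vecs.T

def matrix_eg_step(W, X, eta=0.1):
    """One density-matrix update: shift the matrix log of W by -eta * X,
    exponentiate, and renormalize to trace one. W must be strictly
    positive definite so that its matrix log is defined; after t such
    steps, log W is (up to normalization) a linear combination of the
    instance matrices, as the excerpt describes."""
    G = sym_funcm(W, np.log) - eta * X
    W_new = sym_funcm(G, np.exp)
    return W_new / np.trace(W_new)
```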