2016
DOI: 10.1561/9781680831719
Introduction to Online Convex Optimization

Abstract: Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain an ε-approximate solution is proportional to 1/ε². Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for fractional packing linear programs which runs in 1/ε iterations. The latter algorithm requires solving a convex quadratic program every iteration — an optimization subroutine which dominates the theoretical running time…
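The 1/ε² iteration count mentioned in the abstract is the classical rate of (online) gradient-descent-type methods: with step size η_t ∝ 1/√t, online projected gradient descent guarantees regret O(√T), so driving the average regret below ε takes T ≈ 1/ε² rounds. The sketch below illustrates this on a toy problem (it is an illustration of the standard OGD analysis, not code from the book); the loss sequence, domain [0, 1], and constants D, G are assumptions chosen for the example.

```python
import numpy as np

def ogd_average_regret(T, D=1.0, G=2.0):
    """Online projected gradient descent on the interval [0, 1].

    Losses f_t(x) = (x - z_t)^2 with targets z_t alternating between 0 and 1
    (a hypothetical adversarial-looking sequence). Step size
    eta_t = D / (G * sqrt(t)) yields regret O(D * G * sqrt(T)), i.e. average
    regret eps after roughly 1/eps^2 rounds.
    """
    x = 0.0
    total_loss = 0.0
    zs = []
    for t in range(1, T + 1):
        z = float(t % 2)                  # alternating targets 1, 0, 1, 0, ...
        zs.append(z)
        total_loss += (x - z) ** 2        # suffer the loss at the current point
        grad = 2.0 * (x - z)              # gradient of f_t at x
        eta = D / (G * np.sqrt(t))        # decaying step size
        x = min(1.0, max(0.0, x - eta * grad))  # gradient step + projection onto [0, 1]
    # best fixed comparator in hindsight minimizes sum_t (x - z_t)^2: the mean of z_t
    best = float(np.mean(zs))
    best_loss = sum((best - z) ** 2 for z in zs)
    return (total_loss - best_loss) / T   # average regret

avg_regret = ogd_average_regret(T=10_000)
```

Running with larger T shrinks the average regret roughly like 1/√T, which is exactly why ε accuracy costs on the order of 1/ε² iterations for this family of methods.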


Cited by 617 publications (797 citation statements) · References 78 publications
“…To keep the book manageable, and also more accessible, we chose not to dwell on the deep connections to online convex optimization. A modern treatment of this fascinating subject can be found, e.g., in the recent textbook by Hazan (2015). Likewise, we chose not to venture into the much more general problem space of reinforcement learning, a subject of many graduate courses and textbooks such as Sutton and Barto (1998) and Szepesvári (2010).…”
Section: Preface
confidence: 99%
“…Then, online convex optimization theory, in references [26][27][28], is applied to calculate coefficients of the proposed prediction model. Secondly, an enhanced learning system is built to optimize the portfolio by maximizing future wealth with a kernel-based increasing factor.…”
Section: Introduction
confidence: 99%
“…One main drawback of such methods is that all pairwise distances between genomes are implicitly assumed available, which, due to the computational burden of estimating the alignments, is a very complicated issue that hinders the wider application of such refined methods. On the other hand, Laplacian eigenmap computation being as simple to perform as the PCA, online approaches [11] that only need a small proportion of the pairwise distances have great potential for overcoming these computational issues. Such online algorithms progressively estimate the principal eigenvectors without having to wait for the full matrix to be known.…”
Section: Introduction
confidence: 99%