In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed feasible set. After each point is chosen, the decision-maker encounters a sequence of (possibly unrelated) convex cost functions. Zinkevich (ICML 2003) introduced this framework, which models many natural repeated decision-making problems and generalizes many existing problems such as Prediction from Expert Advice and Cover's Universal Portfolios. Zinkevich showed that a simple online gradient descent algorithm achieves additive regret O(√T), for an arbitrary sequence of T convex cost functions (with bounded gradients), with respect to the best single decision in hindsight. In this paper, we give algorithms that achieve regret O(log T) for an arbitrary sequence of strictly convex functions (with bounded first and second derivatives). This mirrors what has been done for the special cases of prediction from expert advice by Kivinen and Warmuth (EuroCOLT 1999), and Universal Portfolios by Cover (Math. Finance, 1991). We propose several algorithms achieving logarithmic regret, which besides being more general are also much more efficient to implement. The main new ideas give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field. Our analysis shows a surprising connection between the natural follow-the-leader approach and the Newton method. We also analyze other algorithms, which tie together several different previous approaches including follow-the-leader, exponential weighting, Cover's algorithm and gradient descent.
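Zinkevich's online gradient descent takes a step against the gradient of the cost just revealed and projects back onto the feasible set. A minimal sketch, assuming a Euclidean-ball feasible set and the standard step sizes eta_t = 1/√t that yield the O(√T) regret bound (the function names are illustrative, not from the paper):

```python
import numpy as np

def project_to_ball(x, radius=1.0):
    # Euclidean projection onto the ball of the given radius
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(grad_fns, x0, radius=1.0):
    # grad_fns[t] maps the current point to the gradient of the t-th cost,
    # revealed only after the point is played (the online setting).
    x = np.asarray(x0, dtype=float)
    for t, grad in enumerate(grad_fns, start=1):
        eta = 1.0 / np.sqrt(t)  # decaying step size, as in Zinkevich's analysis
        x = project_to_ball(x - eta * grad(x), radius)
    return x
```

For strictly convex costs, the logarithmic-regret algorithms of this paper replace the 1/√t step schedule (or the plain gradient step) with curvature-aware updates such as the Newton-style step.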
We give O(√(log n))-approximation algorithms for the Min UnCut, Min 2CNF Deletion, Directed Balanced Separator, and Directed Sparsest Cut problems. The previously best known algorithms give an O(log n)-approximation for Min UnCut [9], Directed Balanced Separator [17], Directed Sparsest Cut [17], and an O(log n log log n)-approximation for Min 2CNF Deletion [14]. We also show that the integrality gap of an SDP relaxation of the Minimum Multicut problem is Ω(log n).
We experimentally study on-line investment algorithms first proposed by Agarwal and Hazan and extended by Hazan et al. which achieve almost the same wealth as the best constant-rebalanced portfolio determined in hindsight. These algorithms are the first to combine optimal logarithmic regret bounds with efficient deterministic computability. They are based on the Newton method for offline optimization which, unlike previous approaches, exploits second order information. After analyzing the algorithm using the potential function introduced by Agarwal and Hazan, we present extensive experiments on actual financial data. These experiments confirm the theoretical advantage of our algorithms, which yield higher returns and run considerably faster than previous algorithms with optimal regret. Additionally, we perform financial analysis using mean-variance calculations and the Sharpe ratio.
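The Newton-based update maintains a matrix of accumulated outer products of the loss gradients and preconditions each step with its inverse, which is how second-order information enters. A rough sketch of an Online-Newton-Step-style portfolio update, under simplifying assumptions: the logarithmic-wealth loss f(x) = -log(r·x) per period, a plain Euclidean projection onto the simplex (the analyzed algorithm projects in the A_t-norm), and illustrative constants beta and eps:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex (standard sort-based method)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def online_newton_step(returns, beta=1.0, eps=1.0):
    # returns: T x n array of per-period price relatives r_t
    n = returns.shape[1]
    x = np.full(n, 1.0 / n)      # start with the uniform portfolio
    A = eps * np.eye(n)          # accumulated second-order information
    wealth = 1.0
    for r in returns:
        wealth *= r @ x                      # wealth factor for this period
        g = -r / (r @ x)                     # gradient of f(x) = -log(r . x)
        A += np.outer(g, g)
        # Newton-style step, then project back onto the simplex
        x = project_simplex(x - np.linalg.solve(A, g) / beta)
    return wealth
```

Because A grows with each round, the preconditioned steps shrink adaptively, which is what the potential-function analysis exploits to obtain logarithmic regret against the best constant-rebalanced portfolio.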
We present improved approximation algorithms for directed multicut and directed sparsest cut. The current best known approximation ratio for these problems is O(n^(1/2)). We obtain an Õ(n^(11/23))-approximation. Our algorithm works with the natural LP relaxation used in prior work. We use a randomized rounding algorithm with a more sophisticated charging scheme and analysis to obtain our improvement. This also implies an Õ(n^(11/23)) upper bound on the ratio between the maximum multicommodity flow and minimum multicut in directed graphs.