2022
DOI: 10.1137/21m1405551
Tracking and Regret Bounds for Online Zeroth-Order Euclidean and Riemannian Optimization

Cited by 6 publications (8 citation statements)
References 32 publications
“…In the full information setting, the OSM algorithm by Harvey, Liaw, and Soma (2020) has a slightly tighter α-regret than us, but also a much higher computational complexity. We attain the same or better regret than DR-S (Chen, Hassani, and Karbasi 2018; Zhang et al. 2019, 2022) and the remaining algorithms, which either operate on restricted constraint sets (Niazadeh et al. 2021; Matsuoka, Ito, and Ohsaka 2021; Streeter, Golovin, and Krause 2009) or on the much more restrictive LWD class (Kakade, Kalai, and Ligett 2007). Most importantly, our work generalizes to the dynamic and optimistic settings.…”
Section: Related Work
Confidence: 65%
“…OSM via Regret Minimization. Several online algorithms have been proposed for maximizing general submodular functions (Niazadeh et al. 2021; Harvey, Liaw, and Soma 2020; Matsuoka, Ito, and Ohsaka 2021; Streeter, Golovin, and Krause 2009) under different matroid constraints. There has also been recent work (Chen, Hassani, and Karbasi 2018; Zhang et al. 2019, 2022) on the online maximization of continuous DR-submodular functions (Bach 2019).…”
Section: Related Work
Confidence: 99%
“…Aside from the Euclidean space studied for DOL in the literature, Riemannian manifolds, as a generalization of Euclidean spaces, have long been an intriguing topic in deep learning and centralized/decentralized optimization, with a large number of applications such as principal component analysis (PCA), independent component analysis (ICA), radar signal processing, dictionary learning, and mixture modeling [144]. However, the study of DOL on Riemannian manifolds is still missing; there are only a few works in the centralized setup [145], [146], posing the necessity of addressing this case in the future. 10) The Case with Switching Cost.…”
Section: Future Directions
Confidence: 99%
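The works surveyed above concern online zeroth-order optimization, where the learner cannot query gradients and must estimate them from function evaluations. As an illustration of the basic idea (a standard two-point Euclidean estimator; the function, names, and parameters below are illustrative and not taken from the cited paper's construction):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Draws a random unit direction u and forms a directional finite
    difference; in expectation over u this recovers the gradient of a
    smoothed version of f, which is all a zeroth-order oracle allows.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform direction on the unit sphere
    return d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Illustrative check: for f(x) = ||x||^2 the true gradient is 2x,
# so averaging many single-sample estimates should approach 2x.
f = lambda v: float(v @ v)
x = np.array([1.0, -2.0])
g = np.mean(
    [zo_gradient(f, x, rng=np.random.default_rng(i)) for i in range(2000)],
    axis=0,
)
```

A single estimate is noisy (its variance grows with the dimension d), which is why the regret bounds in this line of work degrade relative to the first-order setting.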