2020
DOI: 10.48550/arxiv.2003.02513
Preprint

Simple and Fast Algorithm for Binary Integer and Online Linear Programming

Cited by 9 publications (14 citation statements) | References 0 publications
“…Notably, the problem has been studied under either the stochastic input model, where the coefficient in the objective function, together with the corresponding column in the constraint matrix, is drawn from an unknown distribution P, or the random permutation model, where they arrive in a random order. As noted in Li et al. (2020), the random permutation model exhibits concentration behavior similar to that of the stochastic input model.…”
Section: Other Related Literature (mentioning)
confidence: 63%
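The two input models contrasted in this statement can be made concrete with a short sketch. The snippet below is only an illustration, not code from any of the cited papers: the uniform distribution, the horizon n, and the dimension m are arbitrary assumptions chosen to show where the randomness enters in each model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 5  # horizon and number of resource constraints (illustrative)

def stochastic_input(n, m):
    """Stochastic input model: each reward/column pair (r_t, a_t) is drawn
    i.i.d. from an unknown distribution P (here uniform, for illustration)."""
    for _ in range(n):
        r = rng.uniform(0, 1)           # objective coefficient
        a = rng.uniform(0, 1, size=m)   # corresponding column of the constraint matrix
        yield r, a

def random_permutation_input(rs, A):
    """Random permutation model: the n pairs are fixed (possibly adversarially)
    in advance, but arrive in a uniformly random order."""
    order = rng.permutation(len(rs))
    for t in order:
        yield rs[t], A[t]
```

In the first model the randomness is in the draws themselves; in the second it is only in the arrival order, which is why the two models exhibit similar concentration behavior.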
“…Our algorithm is motivated by the traditional online gradient descent (OGD) algorithm (Hazan, 2016). The OGD algorithm applies a linear update rule according to the gradient information at the current period and has been shown to work well in the stationary setting, even when the distribution is unknown (Lu et al., 2020; Sun et al., 2020; Li et al., 2020). However, the update in OGD only involves historical information, and for the non-stationary setting we have to incorporate prior estimates of the future time periods.…”
Section: Main Results and Contributions (mentioning)
confidence: 99%
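For readers unfamiliar with the linear update rule this statement refers to, here is a minimal generic sketch of OGD, assuming a 1/sqrt(t) step-size schedule and a projection oracle; these choices are standard textbook defaults (cf. Hazan, 2016), not the specific algorithm of any cited paper.

```python
import numpy as np

def ogd(grad, x0, T, eta0=1.0, project=lambda x: x):
    """Online gradient descent: at each period t, observe the gradient at the
    current iterate, take a linear step against it, and project back onto the
    feasible set. Only historical information is used, as noted above."""
    x = np.asarray(x0, dtype=float)
    for t in range(1, T + 1):
        g = grad(x, t)                              # gradient revealed at period t
        x = project(x - (eta0 / np.sqrt(t)) * g)    # linear update, decaying step size
    return x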
“…In the setting of stochastic input models with linear reward functions, it is possible to guarantee an optimal order of regret of O(T^{1/2}) [3, 21]. More recent works by Li et al. [34] and Balseiro et al. [11] propose simple algorithms with O(T^{1/2}) regret guarantees that, in contrast with previous works, do not require periodically solving large linear programs. Finally, under some specific structural assumptions, it is possible to obtain a regret bound of order O(log T) [33, 32].…”
Section: A Related Work (mentioning)
confidence: 99%
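The "simple algorithms" this statement attributes to Li et al. [34] and Balseiro et al. [11] replace repeated LP solves with cheap dual-price updates. Below is a hedged sketch of that dual-descent idea under i.i.d. inputs; the function name, the step size eta, and the greedy feasibility check are illustrative assumptions, not a reproduction of either paper's exact method.

```python
import numpy as np

def dual_descent_online_lp(stream, b, n, eta):
    """Sketch of a dual-based online LP heuristic: maintain a dual price vector p,
    accept an arrival when its reward beats its priced resource cost, and update
    p by a projected subgradient step. No linear program is ever solved."""
    b = np.asarray(b, dtype=float)
    d = b / n                      # average per-period resource budget
    p = np.zeros(len(b))           # dual prices, one per resource
    remaining = b.copy()
    total_reward = 0.0
    for r, a in stream:            # stream yields (reward, constraint column) pairs
        accept = (r > p @ a) and np.all(a <= remaining)
        if accept:
            total_reward += r
            remaining -= a
        x = 1.0 if accept else 0.0
        # projected subgradient step on the dual objective
        p = np.maximum(0.0, p + eta * (a * x - d))
    return total_reward
```

Each arrival costs only O(m) work, which is what makes such methods fast relative to approaches that periodically re-solve large LPs; paired with an i.i.d. generator like the stochastic_input sketch above, the whole run is a single pass over the data.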
“…[4, 6, 8, 9]. Some recent works study the efficiency of convergence [2, 3, 13], extensions to more general settings [1, 11, 12], and more complicated user models [7, 17]. Unlike online matching, online convex optimization involves learning frameworks and has been studied in both theory and practice [10].…”
Section: Related Work (mentioning)
confidence: 99%