Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing 2019
DOI: 10.1145/3313276.3316406
Parallelizing greedy for submodular set function maximization in matroids and beyond

Abstract: We consider parallel, or low adaptivity, algorithms for submodular function maximization. This line of work was recently initiated by Balkanski and Singer and has already led to several interesting results on the cardinality constraint and explicit packing constraints. An important open problem is the classical setting of matroid constraint, which has been instrumental for developments in submodular function maximization. In this paper we develop a general strategy …
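For context, a minimal sketch of the classical sequential greedy that this line of work aims to parallelize (this is illustrative background, not the paper's algorithm): for a monotone submodular f under a cardinality constraint k, greedy achieves a (1 − 1/e)-approximation but requires k fully sequential, i.e. adaptive, rounds.

```python
# Illustrative sketch: classical greedy for monotone submodular maximization
# under a cardinality constraint. Each of the k rounds depends on the previous
# one, which is exactly the adaptivity that parallel algorithms try to reduce.

def greedy(f, ground_set, k):
    """Pick k elements, each maximizing the marginal gain f(S + e) - f(S)."""
    S = set()
    for _ in range(k):
        best = max((e for e in ground_set if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Example: a coverage function f(S) = |union of the sets indexed by S|,
# a standard example of a monotone submodular function.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(sorted(greedy(f, sets.keys(), 2)))  # → [0, 2]
```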

Cited by 34 publications (27 citation statements) · References 45 publications
“…We acknowledge that the query complexity can likely be improved via normalization or estimating an indicator random variable instead. The works of (Chekuri & Quanrud, 2018; Ene et al., 2018b) give constant-factor approximation algorithms with O(log² n) adaptivity for maximizing nonmonotone submodular functions subject to matroid constraints. Their approaches use multilinear extensions and thus require Ω(nk² log² n) function evaluations to simulate an oracle for ∇f with high enough accuracy.…”
Section: Algorithm Approximation Adaptivity Queries
Mentioning confidence: 99%
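The query blow-up that statement describes can be illustrated with a hedged sketch (an assumption-laden toy, not the cited papers' procedure): each coordinate of the gradient of the multilinear extension F(x) = E[f(R(x))] is an expectation, so a simulated gradient oracle estimates it by Monte Carlo sampling, spending many evaluations of f per coordinate.

```python
import random

# Hedged sketch: Monte Carlo estimation of the gradient of the multilinear
# extension. For each sample, R(x) includes element e independently with
# probability x[e], and the partial derivative satisfies
#   dF/dx_e = E[f(R ∪ {e}) - f(R \ {e})].
# Each call below spends 2 * n * num_samples evaluations of f, which is why
# accurate gradient oracles are query-hungry.

def grad_estimate(f, x, num_samples=200, rng=random):
    """Estimate the gradient of the multilinear extension of f at x."""
    n = len(x)
    grad = [0.0] * n
    for _ in range(num_samples):
        R = {e for e in range(n) if rng.random() < x[e]}
        for e in range(n):
            grad[e] += f(R | {e}) - f(R - {e})
    return [g / num_samples for g in grad]
```

For a modular (additive) f the marginal f(R ∪ {e}) − f(R \ {e}) is constant, so the estimate is exact with any number of samples; for general submodular f the sample count governs the accuracy.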
“…We need the following lemma in our analysis. The proof idea follows from [BV14] and appears in [CQ19, BRS19b]. It has some minor differences from previous work, and we provide a proof for completeness.…”
Section: Amplification Via Accelerated Continuous Greedy
Mentioning confidence: 99%
“…In each iteration, we use the above combinatorial algorithm to find the direction of improvement of the multilinear extension. We remark that a similar amplifying procedure was used in previous work on adaptive submodular maximization [CQ19, BRS19b].…”
Section: Technical Overview
Mentioning confidence: 99%
“…Subsequently, Balkanski et al. (2019), Ene and Nguyen (2019), and Fahrbach et al. (2019) independently designed (1 − 1/e − ε)-approximation algorithms using different methods and techniques, each requiring O(log n) adaptive rounds and O(n) oracle queries in expectation. Later, submodular maximization with matroid constraints (Chekuri and Quanrud 2019b), submodular maximization subject to packing constraints (Chekuri and Quanrud 2019a), and the submodular cover problem (Agarwal et al. 2019) were studied in the adaptive complexity model.…”
Section: Introduction
Mentioning confidence: 99%