2018
DOI: 10.48550/arxiv.1810.12464
Preprint

Differentiable Greedy Networks

Abstract: Optimal selection of a subset of items from a given set is a hard problem that requires combinatorial optimization. In this paper, we propose a subset selection algorithm that is trainable with gradient-based methods yet achieves near-optimal performance via submodular optimization. We focus on the task of identifying a relevant set of sentences for claim verification in the context of the FEVER task. Conventional methods for this task look at sentences on their individual merit and thus do not optimize the in…

Cited by 1 publication (11 citation statements)
References 13 publications
“…SMOOTHED GREEDY We develop SMOOTHED GREEDY by stochastically perturbing argmax; this generalizes the existing algorithms [52,46]. We prove that the perturbation does not spoil the original guarantees: almost (1 − 1/e)- and 1/(κ+1)-approximation guarantees are achieved in expectation for the cases of cardinality and κ-extensible system constraints, respectively, where a subtractive term depending on the perturbation strength affects the guarantees.…”
Section: Introduction
confidence: 95%
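The citation statement above describes replacing the greedy algorithm's hard argmax with a stochastic perturbation. A minimal sketch of that idea, under the assumption that the perturbation takes the form of sampling from a softmax over marginal gains (the cited paper's actual perturbation scheme and guarantees are more general), for a cardinality-constrained monotone submodular objective:

```python
import math
import random

def smoothed_greedy(items, f, k, epsilon=0.1, seed=0):
    """Greedy maximization of a set function f under |S| <= k, with the
    hard argmax replaced by sampling from a softmax over marginal gains.
    epsilon controls the perturbation strength: as epsilon -> 0 this
    approaches the deterministic greedy argmax. (Illustrative sketch,
    not the cited paper's exact algorithm.)"""
    rng = random.Random(seed)
    S = set()
    for _ in range(k):
        candidates = [x for x in items if x not in S]
        # Marginal gain of adding each remaining item to S.
        gains = [f(S | {x}) - f(S) for x in candidates]
        # Numerically stable softmax over gains / epsilon.
        m = max(gains)
        weights = [math.exp((g - m) / epsilon) for g in gains]
        # Sample one candidate proportionally to its softmax weight.
        r = rng.random() * sum(weights)
        acc = 0.0
        for x, w in zip(candidates, weights):
            acc += w
            if acc >= r:
                S.add(x)
                break
    return S

# Toy monotone submodular objective: coverage of ground elements.
coverage_sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 2, 3, 4}}
f = lambda S: len(set().union(*(coverage_sets[i] for i in S))) if S else 0
print(smoothed_greedy(list(coverage_sets), f, k=2, epsilon=0.01))
```

With a small epsilon the sampling concentrates on the true argmax, so the first pick is item 3 (marginal gain 4); with a larger epsilon the selection becomes more uniform, which is what makes the procedure amenable to gradient-based training at the cost of the subtractive term in the guarantee.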
“…Differentiable greedy submodular maximization is studied in [52,46]. Our work differs from theirs in terms of theoretical guarantees, differentiation methods, and problem settings, as explained above (see also Appendix A).…”
Section: Related Work
confidence: 99%