2022
DOI: 10.48550/arxiv.2202.03051
Preprint

Using Partial Monotonicity in Submodular Maximization

Abstract: Over the last two decades, submodular function maximization has been the workhorse of many discrete optimization problems in machine learning applications. Traditionally, the study of submodular functions was based on binary function properties. However, such properties have an inherent weakness, namely, if an algorithm assumes functions that have a particular property, then it provides no guarantee for functions that violate this property, even when the violation is very slight. Therefore, recent works began t…

Cited by 4 publications (7 citation statements)
References 25 publications
“…To tackle this problem, we reformulate it into a cardinality-constrained set function optimization task. Subsequently, we introduce novel set function properties, (m, m)-partial monotonicity and (γ, γ)-weak submodularity, extending recent notions of partial monotonicity (Mualem and Feldman 2022) and approximate submodularity (Elenberg et al. 2018; Harshaw et al. 2019; De et al. 2021). These properties allow us to design a greedy algorithm, GENEX, to compute near-optimal feature subsets with a new approximation guarantee.…”
Section: Discrete Continuous Training Framework
confidence: 99%
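The statement above mentions a greedy algorithm for a cardinality-constrained objective. As a generic point of reference only, here is a minimal sketch of the classical greedy template for that problem class; it is not the GENEX algorithm from the citing paper, and the objective f, ground set, and toy coverage data are illustrative assumptions.

def greedy_max(f, ground_set, k):
    # Classical greedy for max f(S) subject to |S| <= k:
    # repeatedly add the element with the largest marginal gain.
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        if not gains:
            break
        S.add(max(gains, key=gains.get))
    return S

# Illustrative use with a toy coverage objective (hypothetical data):
sets = {1: {'a'}, 2: {'a', 'b'}, 3: {'c'}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(greedy_max(f, {1, 2, 3}, 2))  # picks {2, 3}, covering all three items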
“…Definition 1 [2] Given any non-negative set function G: 2^N → ℝ≥0, the monotonicity ratio of this function is defined as the scalar m_G ∈ [0, 1], such that,…”
Section: Preliminary
confidence: 99%
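The quoted definition is cut off. Assuming the standard formulation from the cited paper, m_G is the largest scalar with G(T) ≥ m_G · G(S) for all S ⊆ T ⊆ N, i.e., m_G = min over nested pairs of G(T)/G(S), with 0/0 read as 1; this reading, and the toy coverage example below, are assumptions. A minimal brute-force sketch:

from itertools import combinations

def subsets(s):
    s = list(s)
    return (set(c) for r in range(len(s) + 1) for c in combinations(s, r))

def monotonicity_ratio(G, N):
    # Brute-force m_G = min over S subset-of T subset-of N of G(T)/G(S),
    # with the 0/0 case read as 1 (assumed convention).
    # Exponential in |N|; only meant for tiny ground sets.
    m = 1.0
    for S in subsets(N):
        gS = G(S)
        for extra in subsets(N - S):
            if gS == 0:
                continue  # ratio is 1 (by the 0/0 convention) or unbounded; never lowers the min
            m = min(m, G(S | extra) / gS)
    return m

# A monotone coverage function attains the maximum ratio of 1:
sets = {1: {'a'}, 2: {'a', 'b'}}
G = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(monotonicity_ratio(G, {1, 2}))  # 1.0

A fully monotone function has m_G = 1, so the ratio interpolates continuously between the monotone (m_G = 1) and general non-monotone (m_G = 0) regimes.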
“…That means that there exist gaps in the approximation ratios between monotone and non-monotone submodular optimization problems. For non-negative set functions [2], there is also a parameter, the monotonicity ratio, that measures how close to monotone the function is.…”
Section: Introduction
confidence: 99%
“…Recently, Iyer (2015) introduced the concept of the monotonicity ratio, a continuous version of monotonicity, under the non-adaptive setting. Mualem and Feldman (2022) provide a systematic study of this property.…”
Section: Additional Related Work
confidence: 99%