2020
DOI: 10.1109/msp.2020.3003836

Submodularity in Action: From Machine Learning to Signal Processing Applications

Abstract: Submodularity is a discrete domain functional property that can be interpreted as mimicking the role of well-known convexity/concavity properties in the continuous domain. Submodular functions exhibit strong structure that leads to efficient optimization algorithms with provable near-optimality guarantees. These characteristics, namely, efficiency and provable performance bounds, are of particular interest for signal processing (SP) and machine learning (ML) practitioners, as a variety of discrete optimization p…
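
The efficiency and near-optimality guarantees mentioned in the abstract are exemplified by the classical greedy algorithm for maximizing a monotone submodular function under a cardinality constraint, which achieves at least a (1 - 1/e) fraction of the optimum (Nemhauser, Wolsey, and Fisher, 1978). Below is a minimal Python sketch; the coverage objective is an illustrative stand-in, not a function from the paper.

```python
# Minimal sketch of the classical greedy algorithm for maximizing a
# monotone submodular set function f under a cardinality constraint k.
# Greedy selection achieves at least (1 - 1/e) of the optimal value
# for such functions (Nemhauser, Wolsey, and Fisher, 1978).
# The coverage objective is an illustrative stand-in, not from the paper.

def coverage(selected, sets):
    """Submodular coverage objective: number of distinct elements covered."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy_max(ground_set, f, k):
    """Add, k times, the item with the largest marginal gain f(S + i) - f(S)."""
    selected = []
    for _ in range(k):
        candidates = [i for i in ground_set if i not in selected]
        gains = {i: f(selected + [i]) - f(selected) for i in candidates}
        selected.append(max(gains, key=gains.get))
    return selected

# Toy usage: pick 2 of 4 candidate sets to cover the most elements.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max(range(len(sets)), lambda S: coverage(S, sets), k=2))  # [0, 2]
```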

Cited by 33 publications (20 citation statements)
References: 21 publications

“…Submodular functions. Submodular functions (Tohidi et al, 2020; Bach, 2011; 2019) have been widely used for data subset selection as they naturally model properties like coverage, representation, diversity, etc. Given a ground-set of n data points…”
Section: Submodular Mutual Information
Citation type: mentioning, confidence: 99%
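
As a concrete illustration of how submodular functions model coverage and representation for data subset selection, the sketch below uses the facility-location objective, a standard monotone submodular function for this task. The data points and similarity matrix are synthetic placeholders, not from any of the cited works.

```python
import numpy as np

# Facility-location objective, a standard monotone submodular function
# for data subset selection: f(S) = sum over all points of the maximum
# similarity to any selected point, rewarding subsets that "represent"
# the whole ground set. The data below are synthetic placeholders.

def facility_location(S, sim):
    """Representativeness of subset S (list of indices) under similarity sim."""
    if not S:
        return 0.0
    return float(np.max(sim[:, S], axis=1).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))      # n = 100 synthetic data points
sim = X @ X.T                      # inner-product similarities
sim -= sim.min()                   # shift to non-negative values

S = []                             # greedy subset selection, as sketched above
for _ in range(10):
    gains = [-np.inf if j in S else
             facility_location(S + [j], sim) - facility_location(S, sim)
             for j in range(sim.shape[0])]
    S.append(int(np.argmax(gains)))
print(S)                           # indices of 10 representative points
```
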
“…As a result, we can directly apply the greedy heuristic to problem (P2) by adding to the sampling set the residual node $n \in \bar{S}_i = V \setminus S_i$ that minimizes the MSD. Alternatively, other criteria used in experimental design that exhibit amenable properties for greedy selection (e.g., submodularity [20]), such as the (pseudo) log-determinant criterion…”
Section: B. Node Sampling
Citation type: mentioning, confidence: 99%
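
The (pseudo) log-determinant criterion mentioned in this excerpt can itself be optimized greedily: for a positive-definite kernel K, the map S -> log det K[S, S] is submodular, so greedy selection inherits the usual guarantees. The sketch below assumes a synthetic positive-definite matrix K standing in for the cited paper's graph model; all names and sizes are illustrative only.

```python
import numpy as np

# Greedy node sampling with a log-determinant criterion: repeatedly add
# the residual node giving the largest gain in log det of the kernel
# restricted to the sampling set. For positive-definite K, the set
# function S -> log det K[S, S] is submodular. K below is a synthetic
# stand-in for the graph model of the cited paper (illustration only).

def logdet(S, K, eps=1e-8):
    """log det of the principal submatrix K[S, S] (jitter for stability)."""
    if not S:
        return 0.0
    Ks = K[np.ix_(S, S)] + eps * np.eye(len(S))
    _, val = np.linalg.slogdet(Ks)
    return val

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 30))
K = A @ A.T / 30.0                 # synthetic positive-definite kernel, 30 nodes

S = []                             # sampling set
for _ in range(5):
    residual = [n for n in range(K.shape[0]) if n not in S]
    gains = {n: logdet(S + [n], K) - logdet(S, K) for n in residual}
    S.append(max(gains, key=gains.get))
print(sorted(S))                   # 5 greedily selected sample nodes
```
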
“…This is because the support estimated from $(P_{S_i,0})$ may change between iterations, especially in the earlier ones. However, when the number of samples becomes large enough and the support does not change, using submodular functions in (P2) may come with near-optimal guarantees [20]. A deeper analysis of the latter will be done in future work.…”
Citation type: mentioning, confidence: 99%

“…Model-based equalizers usually require a good understanding of the transmission system's attributes and rich expert knowledge in design, but they often lack generalization ability [9]. Facing this challenge, and with the emergence of artificial intelligence technology [10], several learning-based equalizers have been proposed that are more adaptable over varied channel conditions. Most of the learning-based equalizers are nonlinear equalizers.…”
Section: Introduction
Citation type: mentioning, confidence: 99%