2013 5th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
DOI: 10.1109/camsap.2013.6713997
To convexify or not? Regression with clustering penalties on graphs

Abstract: We consider minimization problems that are compositions of convex functions of a vector x ∈ R^N with submodular set functions of its support (i.e., the indices of the non-zero coefficients of x). Such problems are in general difficult for large N due to their combinatorial nature. In this setting, existing approaches rely on "convexifications" of the submodular set function based on the Lovász extension for tractable approximations. In this paper, we first demonstrate that such convexifications can fundam…
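For reference, the problem class described in the abstract can be written compactly as follows (a restatement, not a formula from the paper; f and g are generic placeholders for the convex loss and submodular penalty named above):

```latex
\min_{x \in \mathbb{R}^N} \; f(x) + g(\operatorname{supp}(x)),
\qquad \operatorname{supp}(x) = \{\, i : x_i \neq 0 \,\},
```

where f is convex and g is submodular, i.e., g(A) + g(B) ≥ g(A ∪ B) + g(A ∩ B) for all A, B ⊆ {1, …, N}.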

Cited by 2 publications (8 citation statements)
References 11 publications
“…In the discrete setting, (26) is preserved in an attempt to faithfully encode the discrete model at hand. While such criterion seems cumbersome to solve, it has favorable properties that lead to polynomial solvability, irrespective of its combinatorial nature: there are efficient combinatorial algorithms that provide practical solutions to overcome this bottleneck and are guaranteed to converge in polynomial time [43]; see Section 7.1.…”
Section: The Discrete Model
confidence: 99%
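The polynomial solvability referenced in this excerpt is most concrete for cut-type penalties, where minimizing a modular (per-index) cost plus a graph cut function reduces to an s-t minimum cut. A minimal sketch of that standard reduction, using networkx; the instance (theta, edges, w) is illustrative and not taken from the paper:

```python
# Minimize  sum_i theta[i]*s_i + w * (number of cut edges)  over s in {0,1}^N
# by reduction to an s-t minimum cut. Illustrative instance only:
# theta, edges, and w are NOT taken from the paper.
import networkx as nx

theta = {0: -2.0, 1: 0.5, 2: 1.0, 3: -0.5}  # modular (per-index) costs
edges = [(0, 1), (1, 2), (2, 3)]             # chain graph
w = 1.0                                      # uniform cut weight

G = nx.DiGraph()
for i, cost in theta.items():
    if cost < 0:
        # negative cost favors s_i = 1: pay |cost| only if i ends up on the sink side
        G.add_edge("src", i, capacity=-cost)
    else:
        # positive cost favors s_i = 0: pay cost only if i stays on the source side
        G.add_edge(i, "snk", capacity=cost)
for i, j in edges:
    # symmetric pairwise term: pay w whenever the labels of i and j differ
    G.add_edge(i, j, capacity=w)
    G.add_edge(j, i, capacity=w)

cut_value, (source_side, _) = nx.minimum_cut(G, "src", "snk")
support = sorted(i for i in theta if i in source_side)  # s_i = 1 on the source side
print(cut_value, support)
```

The construction routes each unary cost to the source or sink depending on its sign, so every s-t cut pays exactly the energy of the corresponding labeling up to a constant, and the minimum cut yields a global minimizer in polynomial time.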
“…We can also view the Ising penalty as a cut function: R_ISING(S) indeed just counts the number of edges that are cut by the set S. Cut functions with appropriate graphs and weights actually capture a large subset of submodular functions [72,78]. Since R_ISING is symmetric, its convexification is just its Lovász extension, as discussed above, which is shown [43] to be the anisotropic discrete Total Variation semi-norm…”
Section: Examples
confidence: 99%
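Spelling out the correspondence in this excerpt, with G = (V, E) the underlying graph (standard definitions; the notation follows the quoted text):

```latex
R_{\mathrm{ISING}}(S) \;=\; \bigl|\{\, (i,j) \in E : i \in S,\; j \notin S \,\}\bigr|,
\qquad
\Omega(x) \;=\; \sum_{(i,j) \in E} \lvert x_i - x_j \rvert,
```

where Ω is the Lovász extension of R_ISING, i.e., the anisotropic discrete Total Variation semi-norm the excerpt refers to.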
“…Here, we aim to efficiently compute numerically good approximations to the MAP estimator. Given our assumptions, the objective function in (1) can be iteratively minimized by the majorization-minimization scheme of Algorithm 1, see also [10]. The main idea is to majorize the continuous part L_y(x) + G(x|s) at each iteration by a modular upper bound, and then solve the resulting SFM.…”
Section: A4 The Regularizer on the State Vector R(s) = −log p(s)
confidence: 99%
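The loop described in this excerpt has a simple skeleton: majorize the continuous part by a modular bound at the current set, then solve the resulting submodular function minimization (SFM). A self-contained toy sketch of that structure; the separable L, chain-cut R, and brute-force SFM solver below are illustrative stand-ins for the actual components of Algorithm 1 in [10]:

```python
# Majorization-minimization skeleton for  min_S  L(S) + R(S),  R submodular:
# majorize L at the current set by a modular bound, then solve the SFM step.
# Toy stand-ins throughout; brute-force enumeration replaces the actual
# polynomial-time SFM solver, so this only runs for small N.
from itertools import chain, combinations

N = 4
ground = range(N)

def L(S):  # toy separable data-fit term: pay (i + 1) for each excluded index
    return sum(i + 1 for i in ground if i not in S)

def R(S):  # submodular cut penalty on a chain graph
    return sum(1 for i in range(N - 1) if (i in S) != (i + 1 in S))

def all_subsets(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def sfm_bruteforce(F):  # exact SFM by enumeration (toy-sized ground sets only)
    return min((frozenset(S) for S in all_subsets(ground)), key=F)

def mm(max_iter=20):
    S = frozenset()
    for _ in range(max_iter):
        # modular upper bound on L at S; since this toy L is separable the
        # bound is exact, but in general it only needs to touch L at S
        theta = {i: L(S | {i}) - L(S - {i}) for i in ground}
        S_next = sfm_bruteforce(lambda A: sum(theta[i] for i in A) + R(A))
        if S_next == S:  # fixed point: the majorizer is tight at S
            return S
        S = S_next
    return S

print(sorted(mm()))  # -> [0, 1, 2, 3] for this toy instance
```

In practice the SFM step would use a polynomial-time combinatorial solver (e.g., the min-cut reduction sketched earlier for cut penalties) rather than enumeration.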
“…However, the presence of the discrete component R(S) in our model renders the optimization difficult. We present an extension of the efficient Majorization-Minimization algorithm introduced in [10] that iteratively maximizes the log-posterior log p(x, s|y), with guaranteed convergence. Our numerical results show that the proposed algorithm can take full advantage of all available prior information on the signal, while for non-truly sparse signals, state-of-the-art methods are capable of leveraging only a part of it.…”
Section: Introduction
confidence: 99%