2003
DOI: 10.1162/08997660360581958

The Concave-Convex Procedure

Abstract: The concave-convex procedure (CCCP) is a way to construct discrete-time iterative dynamical systems that are guaranteed to monotonically decrease global optimization and energy functions. This procedure can be applied to almost any optimization problem, and many existing algorithms can be interpreted in terms of it. In particular, we prove that all expectation-maximization algorithms, and classes of Legendre minimization and variational bounding algorithms, can be reexpressed in terms of CCCP. We show that many …
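The iteration the abstract describes can be sketched on a toy energy. The general scheme splits the energy as E(x) = E_vex(x) + E_cave(x), with E_vex convex and E_cave concave, and each step solves ∇E_vex(x_{t+1}) = −∇E_cave(x_t). The specific function and code below are an illustrative sketch, not from the paper:

```python
import math

def cccp_minimize(x0, steps=50):
    """Minimize E(x) = x**4 - 2*x**2 by CCCP, decomposed as
    E_vex(x) = x**4 (convex) and E_cave(x) = -2*x**2 (concave).
    The update solves E_vex'(x_{t+1}) = -E_cave'(x_t):
        4*x_{t+1}**3 = 4*x_t  =>  x_{t+1} = cbrt(x_t)."""
    E = lambda x: x**4 - 2 * x**2
    x = x0
    energies = [E(x)]
    for _ in range(steps):
        x = math.copysign(abs(x) ** (1 / 3), x)  # real cube root
        energies.append(E(x))
    return x, energies

x_star, energies = cccp_minimize(2.0)
```

Starting from x0 = 2.0 the iterates converge to the minimizer x = 1, and the recorded energies are non-increasing at every step, which is the monotone-decrease guarantee the abstract states.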

Cited by 1,140 publications (912 citation statements)
References 26 publications
“…The presence of hidden variables h in (12) makes the overall optimization problem non-convex. The problem in (12) can be expressed as a difference of two convex terms f(w) and g(w), and thus can be solved using the CCCP algorithm [26]. Our learning iterates two steps: (i) given w, each ŷ^(l), h^(l), and h^{*(l)} can be efficiently estimated using our bottom-up/top-down inference explained in Sec.…”
Section: Max-margin Learning (mentioning)
confidence: 99%
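The difference-of-convex structure in the snippet above can be illustrated with a minimal sketch (toy objective, not the cited authors' model): CCCP replaces the subtracted convex term g by its tangent at the current iterate, leaving a convex surrogate that here can be minimized in closed form.

```python
def dc_minimize(w0, iters=20):
    """Toy difference-of-convex problem:
        minimize J(w) = f(w) - g(w),  f(w) = (w - 3)**2,  g(w) = |w|.
    Each CCCP step linearizes g at w_t using a subgradient sign(w_t)
    and solves the convex surrogate exactly:
        w_{t+1} = argmin_w (w - 3)**2 - sign(w_t) * w
                = 3 + sign(w_t) / 2."""
    sign = lambda v: 1.0 if v >= 0 else -1.0
    J = lambda w: (w - 3.0) ** 2 - abs(w)
    w, history = w0, [J(w0)]
    for _ in range(iters):
        w = 3.0 + sign(w) / 2.0  # closed-form surrogate minimizer
        history.append(J(w))
    return w, history

w_star, history = dc_minimize(-2.0)
```

Even from a poorly chosen start (w0 = −2), the objective values in `history` decrease monotonically and the iterates settle at w = 3.5, a local (here also global) minimizer, matching the guarantee the snippet invokes.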
“…In that case, (7) implies (9). If, on the other hand, (7) is false, no y ∈ N_H^ℓ(x) is labeled ℓ + 1.…”
Section: Sub- and Supermodularity of the General Prior (mentioning)
confidence: 99%
“…We will show that this problem can be approximated efficiently using the submodular-supermodular procedure [8]. This technique was inspired by the concave-convex procedure for continuous functions [9]. Our algorithm changes the underlying graph in a way different from earlier submodular-supermodular techniques [10,11].…”
Section: Introduction (mentioning)
confidence: 99%
“…A recent local search approach for the linear case is given by Wang et al. [146,156]. It is based on applying the so-called constrained concave-convex procedure [133,152], an iterative optimization strategy yielding a local optimum for a non-convex optimization problem [134,152]. Wang et al. [146] apply this iterative scheme by deriving appropriate quadratic programming problems (with many constraints) to be solved per iteration.…”
Section: Unsupervised Support Vector Machines (mentioning)
confidence: 99%