2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
DOI: 10.1109/allerton.2019.8919697

Global Optimality Guarantees for Nonconvex Unsupervised Video Segmentation

Abstract: In this paper, we consider the problem of unsupervised video object segmentation via background subtraction. Specifically, we pose the nonsemantic extraction of a video's moving objects as a nonconvex optimization problem via a sum of sparse and low-rank matrices. The resulting formulation, a nonnegative variant of robust principal component analysis, is more computationally tractable than its commonly employed convex relaxation, although not generally solvable to global optimality. In spite of this limitation…
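The abstract's formulation splits a data matrix into a low-rank term (the static background) plus a sparse term (the moving foreground) under nonnegativity constraints. The following is a minimal sketch of such a nonnegative sparse-plus-low-rank decomposition via alternating projected updates; it is not the paper's algorithm, and the function name and hyperparameters (r, lam, lr, steps) are illustrative assumptions.

```python
import numpy as np

def nonneg_sparse_lowrank(M, r=2, lam=0.1, lr=1e-3, steps=500, seed=0):
    """Illustrative sketch: decompose M ≈ U @ V.T + S with U, V, S >= 0.

    U @ V.T models the low-rank (background) part, S the sparse
    (foreground) part. Hyperparameters are placeholders, not values
    taken from the paper.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.random((m, r))
    V = rng.random((n, r))
    S = np.zeros((m, n))
    for _ in range(steps):
        R = M - U @ V.T - S                        # current fit residual
        U = np.clip(U + lr * (R @ V), 0.0, None)   # projected gradient step
        V = np.clip(V + lr * (R.T @ U), 0.0, None)
        # closed-form nonnegative soft-threshold for the sparse term
        S = np.clip(M - U @ V.T - lam, 0.0, None)
    return U @ V.T, S  # background estimate, foreground estimate
```

In a video setting, each column of M would be one vectorized frame, so thresholding the returned S gives a crude per-frame foreground mask.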

Cited by 2 publications (3 citation statements) | References 44 publications

Citation statements (ordered by relevance):
“…In this sense, we can view finite-state controllers as belonging to the class of model-based, or planning, approaches such as Bayesian POMDPs. Under some conditions, policy gradient methods can attain global maxima, whereas in general, they are only guaranteed to reach local maxima (37). What can be done when the agent does not know how the environment interacts with itself through the sensorimotor interface?…”
Section: Results
confidence: 99%
“…While the exact recovery of a low-rank matrix via convex optimization is well understood [7,9], its non-convex counterpart

$$\inf_{(X,Y)\in\mathbb{R}^{m\times r}\times\mathbb{R}^{n\times r}} \|XY^T - M\|_1 \tag{1}$$

remains elusive, where $M \in \mathbb{R}^{m\times n}$ and $\|A\|_1 := \sum_{i=1}^{m}\sum_{j=1}^{n}|A_{ij}|$ for any $A \in \mathbb{R}^{m\times n}$. Note that minimizing the Frobenius norm squared instead yields approximate recovery and is better understood [2,32,14].…”
Section: Introduction
confidence: 99%
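For concreteness, the objective (1) quoted above, together with one subgradient step on it, can be written in a few lines of NumPy. This is only an illustrative sketch: using sign(·) as a subgradient of the entrywise absolute value (valid away from zero) and the fixed step size lr are assumptions, not details taken from the cited works.

```python
import numpy as np

def l1_objective(X, Y, M):
    """Entrywise l1 loss ||X @ Y.T - M||_1 from Eq. (1) in the excerpt."""
    return np.abs(X @ Y.T - M).sum()

def l1_subgradient_step(X, Y, M, lr=1e-3):
    """One subgradient step on the nonsmooth l1 loss.

    sign(X @ Y.T - M) is a valid subgradient of the entrywise absolute
    value away from zero; lr is a placeholder step size.
    """
    G = np.sign(X @ Y.T - M)                 # m x n subgradient matrix
    return X - lr * (G @ Y), Y - lr * (G.T @ X)
```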
“…At best, this would imply convergence guarantees for local search algorithms when initialized in a neighborhood of the global minima. In order to prove convergence to a global minimum from any random initial point, as observed in [21,25,15,1], it is necessary to analyze the landscape. We do so in the rank-one case and obtain the following theorem.…”
Section: Introduction
confidence: 99%
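As a toy complement to the rank-one landscape discussion above, the snippet below runs the subgradient step from the previous sketch on a random rank-one target from a random initialization; the diminishing step-size schedule is an assumed choice, and the experiment illustrates rather than verifies the quoted theorem.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 20, 15, 1
M = rng.random((m, r)) @ rng.random((n, r)).T   # random rank-one target
X = rng.standard_normal((m, r))                 # random initial point
Y = rng.standard_normal((n, r))
for t in range(5000):
    # diminishing step size, a standard choice for subgradient methods
    X, Y = l1_subgradient_step(X, Y, M, lr=1e-2 / (1 + t) ** 0.5)
print(l1_objective(X, Y, M))  # a small value suggests a near-global minimum
```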