2015 IEEE International Conference on Computer Vision Workshop (ICCVW) 2015
DOI: 10.1109/iccvw.2015.114

Dual Principal Component Pursuit

Abstract: We consider the problem of learning a linear subspace from data corrupted by outliers. Classical approaches are typically designed for the case in which the subspace dimension is small relative to the ambient dimension. Our approach works with a dual representation of the subspace and hence aims to find its orthogonal complement; as such, it is particularly suitable for subspaces whose dimension is close to the ambient dimension (subspaces of high relative dimension). We pose the problem of computing normal ve…
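The abstract describes finding the subspace's orthogonal complement by computing normal vectors to the inliers. A common way to make this concrete is to minimize the l1 objective sum_i |x_i^T b| over the unit sphere, which an iteratively reweighted least squares (IRLS) loop can approximate. The sketch below illustrates that formulation only; it is not the paper's own algorithm, and the function name and parameters are my own assumptions.

```python
import numpy as np

def dpcp_normal_irls(X, n_iter=100, eps=1e-8):
    """Find a unit vector b approximately minimizing sum_i |x_i^T b|,
    i.e. a normal to the inlier subspace (hypothetical IRLS sketch)."""
    # X: D x N matrix whose columns are (normalized) data points.
    U = np.linalg.svd(X, full_matrices=False)[0]
    b = U[:, -1]                  # init: smallest singular direction
    for _ in range(n_iter):
        # Reweighting: small residuals |x_i^T b| get large weights,
        # so each l2 subproblem mimics the l1 objective.
        w = 1.0 / np.maximum(np.abs(X.T @ b), eps)
        M = (X * w) @ X.T         # weighted scatter matrix, D x D
        b = np.linalg.eigh(M)[1][:, 0]  # eigenvector of smallest eigenvalue
    return b
```

On data whose inliers lie in a hyperplane, the recovered b aligns with the hyperplane's normal even in the presence of a moderate fraction of outliers, since outliers contribute only a bounded l1 penalty in any direction.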

Cited by 53 publications (61 citation statements)
References 79 publications
“…Baselines. We compare with 6 other representative methods that are designed for detecting outliers in one or multiple subspaces: CoP [40], OutlierPursuit [55], REAPER [21], DPCP [46], LRR [28] and ℓ1-thresholding [43]. We also compare with a graph-based method: OutRank [34,35].…”
Section: Methods
confidence: 99%
“…Similarly, the solver of [43] has O(nm^2 + m^3) complexity per iteration. In [31], the complement of the column space of L is recovered via a series of linear optimization problems, each obtaining one direction in the complement space. This method is sensitive to structured outliers, particularly linearly dependent outliers, and requires the columns of L not to exhibit a clustering structure, which is prevalent in much real-world data.…”
Section: Related Work
confidence: 99%
“…This method is sensitive to structured outliers, particularly linearly dependent outliers, and requires the columns of L not to exhibit a clustering structure, which is prevalent in much real-world data. Also, the approach presented in [31] requires solving m − r linear optimization problems consecutively, resulting in high computational complexity and long run times for high-dimensional data.…”
Section: Related Work
confidence: 99%
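The quoted passages describe recovering the orthogonal complement one direction at a time through a sequence of optimization problems. The self-contained sketch below illustrates that sequential (deflation) scheme, with an IRLS inner solver standing in for the linear programs; the function name and solver choice are my own assumptions, not the cited method's implementation.

```python
import numpy as np

def dpcp_complement(X, c, n_iter=50, eps=1e-8):
    """Sketch: recover an orthonormal basis (D x c) of the inlier
    subspace's orthogonal complement, one normal vector per pass."""
    D = X.shape[0]
    normals = []
    Q = np.eye(D)                       # basis of the space still searched
    for k in range(c):
        Y = Q.T @ X                     # data in reduced coordinates
        u = np.linalg.svd(Y, full_matrices=False)[0][:, -1]
        for _ in range(n_iter):         # IRLS for min ||Y^T u||_1, ||u|| = 1
            w = 1.0 / np.maximum(np.abs(Y.T @ u), eps)
            M = (Y * w) @ Y.T
            u = np.linalg.eigh(M)[1][:, 0]
        normals.append(Q @ u)           # lift back to ambient coordinates
        # Next search space: orthogonal complement of all normals so far,
        # taken from the trailing left singular vectors.
        B = np.column_stack(normals)
        Q = np.linalg.svd(B, full_matrices=True)[0][:, k + 1:]
    return np.column_stack(normals)
```

Each pass solves one reduced-dimension problem and then shrinks the search space, mirroring the consecutive structure (and hence the cost growing with the codimension c = m − r) that the quoted critique points out.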
“…In these tests, we do not compare with DPCP [103], since the code provided online is really just an iterative application of a slower version of the FMS algorithm, and generally DPCP is meant for the setting of large d.…”
Section: B Experiments With RSR On Synthetic and Stylized Datasets
confidence: 99%