Tensor Completion for Estimating Missing Values in Visual Data

2013 | DOI: 10.1109/tpami.2012.39

Cited by 1,731 publications (1,594 citation statements)
References 33 publications
“…To show the advantage of adding the robust error term E for support detection, we do a comparison with another low-rank texture completion method based on matrix completion [25]. In Figure 3, we first corrupted an input low-rank image with a ground truth support Ω, then we generate a new support Ω…”
Section: Solution Via Linearized Alternating Direction Methods (mentioning)
confidence: 99%
“…We use Ω as the input support for our method and the LRTC method [25] and compare their completion results. In the comparison, we set parameters λ = 0.001, α = 0.85 for our algorithm and use DCT as the basis for B_1 and B_2.…”
Section: Solution Via Linearized Alternating Direction Methods (mentioning)
confidence: 99%
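For concreteness, a DCT basis like the B_1 and B_2 mentioned in this quote can be constructed as follows. This is a minimal sketch assuming B_1 = B_2 = an orthonormal DCT-II matrix; the patch size n and the separable 2-D construction are illustrative assumptions, not details taken from the citing paper.

    # Minimal sketch of an orthonormal DCT basis (assumed form of B_1, B_2).
    import numpy as np
    from scipy.fft import dct

    n = 8  # illustrative patch size (an assumption, not from the paper)

    # Column i of D is the DCT of the i-th standard basis vector, so
    # D @ x computes the DCT of x; norm='ortho' makes D orthonormal.
    D = dct(np.eye(n), axis=0, norm="ortho")
    assert np.allclose(D @ D.T, np.eye(n))  # orthonormality check

    # Separable 2-D transform of a patch X: coefficients C = D X D^T.
    X = np.random.rand(n, n)
    C = D @ X @ D.T
    assert np.allclose(D.T @ C @ D, X)  # the transform inverts exactly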
“…Therefore, sparse representation [26,25,18] is generated accordingly, which calls for modeling data vectors as a linear combination of a few elements from an overcomplete dictionary. Depending on the sparse reconstruction coefficients, sparse representation has also been used for many matching and classification applications in computer vision domain, such as object tracking [24], object or face recognition [22], image inpainting [20]. In comparison with conventional sparse representation, where the bases in dictionary are selected manually or generated by a dictionary learning model, we propose a large scale dictionary selection model using low rank constraint, which can retain the original property of the data.…”
Section: Related Work (mentioning)
confidence: 99%
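The "linear combination of a few elements from an overcomplete dictionary" idea from this quote can be made concrete with a small sparse-coding sketch. The orthogonal matching pursuit routine below is a generic illustration (the function and variable names are ours), not the low-rank dictionary selection model proposed in the citing paper.

    # Minimal sketch of sparse coding: approximate y as D @ x with few
    # nonzeros in x, via greedy orthogonal matching pursuit (OMP).
    import numpy as np

    def omp(D, y, k):
        """Greedily pick k atoms of D that best explain y."""
        residual = y.copy()
        support = []
        for _ in range(k):
            # Atom most correlated with the current residual.
            j = int(np.argmax(np.abs(D.T @ residual)))
            support.append(j)
            # Least-squares fit on the selected atoms, then update residual.
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        x = np.zeros(D.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))   # overcomplete dictionary
    D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
    x_true = np.zeros(256)
    x_true[[3, 57, 200]] = [1.0, -2.0, 0.5]
    y = D @ x_true
    x_hat = omp(D, y, k=3)               # recovers the 3-sparse code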
“…Nevertheless, since the rank of a multi-way array is discrete, rank minimization problems are usually hard to solve and sometimes NP hard. Then researchers usually replace rank in the objective function with nuclear norm [14]. It corresponds to the sum of singular values, which can be used as convex envelope of the rank function [15].…”
Section: Introduction (mentioning)
confidence: 99%
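The relaxation described in this quote, replacing rank(X) with the nuclear norm ||X||_* (the sum of singular values), admits a short numerical sketch. Soft-thresholding the singular values is the proximal operator of the nuclear norm; the completion loop, threshold tau, and iteration count below are illustrative assumptions, not the procedure of [14] or [15].

    # Minimal sketch: matrix completion via iterative singular-value
    # soft-thresholding, the standard nuclear-norm surrogate for rank.
    import numpy as np

    def svt(X, tau):
        """Shrink singular values by tau: prox operator of tau * ||.||_*."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def complete(M, mask, tau=1.0, iters=200):
        """Fill unobserved entries (mask == False) of M by SVT iterations."""
        X = np.where(mask, M, 0.0)
        for _ in range(iters):
            X = svt(X, tau)
            X[mask] = M[mask]  # keep the observed entries fixed
        return X

    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))  # rank 5
    mask = rng.random(M.shape) < 0.5  # observe roughly half the entries
    M_hat = complete(M, mask)         # low-rank estimate of the full matrix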