2018
DOI: 10.1109/lsp.2018.2870342

An Artifact-Type Aware DIBR Method for View Synthesis

Cited by 21 publications (24 citation statements) | References 14 publications
“…In [23], threshold segmentation is used to extract the foreground object, and the background layer is compensated. In [24], disocclusion edge pixels are divided into foreground and background based on their depth values. The confidence term and data term in the filling-priority calculation are replaced by a depth term and a background term.…”
Section: Related Work (mentioning)
confidence: 99%
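
The mechanism described for [24] can be read as a two-step scheme: the known pixels along the disocclusion border are first split into foreground and background using the depth map, and the filling priority is then driven by depth and background terms instead of the usual confidence and data terms. The Python sketch below illustrates that reading only; the thresholding rule, the depth convention (larger values meaning closer to the camera), and the product form of the priority are assumptions for illustration, not the cited implementation.

    import numpy as np
    from scipy import ndimage

    def split_disocclusion_boundary(depth, hole_mask, tau):
        # Minimal sketch, not the cited implementation: label the known pixels
        # bordering a disocclusion as foreground or background by thresholding
        # the depth map. Assumes larger depth values mean closer to the camera
        # (disparity-like convention); tau is an assumed global threshold.
        boundary = ndimage.binary_dilation(hole_mask) & ~hole_mask
        foreground = boundary & (depth > tau)
        background = boundary & (depth <= tau)
        return foreground, background

    def fill_priority(depth_term, background_term):
        # Filling priority with the exemplar-based confidence and data terms
        # replaced by a depth term and a background term, as described in the
        # citing text; the product form mirrors Criminisi's P(p) = C(p) * D(p).
        return depth_term * background_term

Under this reading, filling would start from the background side of the boundary, so the disocclusion is completed with background texture rather than foreground texture.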
“…In our experiment, six competitive schemes are selected for comparison, including Criminisi's exemplar-based inpainting method [17], Daribo's inpainting method [19], Ahn's depth-based inpainting method [14], Kao's synthesis method [20], Zhu's approach [12], and Oliveira's method [24]. Among them, Criminisi's and Ahn's inpainting methods are implemented using the code provided by the authors, and the other methods are implemented from the published papers.…”
Section: Visual Quality Evaluation of Synthesized View (mentioning)
confidence: 99%
“…A depth-based gray-level distance matching cost computation method is proposed in [19], but some artifacts might be produced due to depth errors. To prevent interference from foreground texture, several foreground segmentation methods have been proposed [20][21][22]. The quality of the image synthesized by these methods depends strongly on the accuracy of the foreground segmentation.…”
Section: Introduction (mentioning)
confidence: 99%
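
The depth-based gray-level distance cost attributed to [19] can be illustrated as augmenting the usual gray-level patch distance with a depth-distance penalty, so candidate patches from a similar (preferably background) depth are favored during hole filling. The sketch below is only an assumption-laden reading of that idea; the squared-difference form, the known_mask handling, and the weight lam are not the published formulation.

    import numpy as np

    def matching_cost(tex_patch, tex_cand, depth_patch, depth_cand,
                      known_mask, lam=0.5):
        # Illustrative sketch of a depth-augmented gray-level matching cost in
        # the spirit of the method attributed to [19]; the squared-difference
        # form and the weight lam are assumptions, not the published method.
        # Only pixels already known in the target patch contribute to the cost.
        gray_cost = np.sum(((tex_patch - tex_cand) ** 2)[known_mask])
        depth_cost = np.sum(((depth_patch - depth_cand) ** 2)[known_mask])
        return gray_cost + lam * depth_cost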