2010
DOI: 10.1889/1.3500584

51.3: An Ultra‐Low‐Cost 2‐D/3‐D Video‐Conversion System

Abstract: In this paper, we propose an ultra-low-cost 2D-to-3D conversion system that generates depth maps using the human visual perception characteristics of luminance and color. The proposed method has two major parts: an edge-feature-based global scene depth gradient and texture-based local depth refinement. The near-to-far global scene depth is generated by analyzing the edge complexity of each row. The local pixel values (the Y, Cb, and Cr components of the video content) are then used to refine the detailed depth value by en…
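The abstract describes a two-stage pipeline: a global near-to-far depth gradient derived from per-row edge complexity, blended with a per-pixel refinement from the Y, Cb, and Cr channels. The following sketch illustrates that idea under stated assumptions; the blend weight `alpha` and the YCbCr channel weights are hypothetical, since the paper's exact formulas are not given here.

```python
import numpy as np

def estimate_depth_map(y, cb, cr, alpha=0.7):
    """Illustrative two-stage depth estimate (not the paper's exact method):
    a global row-wise gradient from edge complexity, refined per pixel
    with the YCbCr channels. `y`, `cb`, `cr` are (H, W) float arrays
    in [0, 1]; `alpha` is an assumed global/local blend weight."""
    h, w = y.shape

    # Edge complexity per row: mean absolute horizontal luminance
    # gradient. Rows dense with edges are treated as nearer.
    edges = np.abs(np.diff(y, axis=1)).mean(axis=1)            # shape (H,)

    # Accumulate complexity top-to-bottom so depth grows monotonically,
    # giving a near-to-far gradient (top rows far, bottom rows near).
    cum = np.cumsum(edges)
    global_depth = cum / (cum[-1] + 1e-8)                      # (H,) in [0, 1]
    global_depth = np.repeat(global_depth[:, None], w, axis=1)  # (H, W)

    # Local texture refinement: a simple weighted combination of the
    # Y, Cb, and Cr values. The 0.5/0.25/0.25 weights are assumptions.
    local = 0.5 * y + 0.25 * cb + 0.25 * cr

    # Blend the global gradient with the local refinement.
    return alpha * global_depth + (1.0 - alpha) * local
```

Because both stages produce values in [0, 1], the blended map can be scaled directly to an 8-bit depth image for a DIBR renderer.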

Cited by 11 publications (12 citation statements)
References 8 publications
"…For the performance evaluation of the proposed approach, we used Cheng's methods [9], [10] as benchmark methods. They are the minimum spanning tree (MST) segmentation-based depth map generation method [10] and the YCbCr color channel-based depth map generation method [9].…"
Section: Results
confidence: 99%
"…They are the minimum spanning tree (MST) segmentation-based depth map generation method [10] and the YCbCr color channel-based depth map generation method [9]. For converting to 3D images, the same DIBR algorithm [2] was applied to all methods.…"
Section: Results
confidence: 99%
"…Consequently, depth information is a key component for 3D reconstruction. Therefore, a number of 2D-to-3D conversion systems [1]-[6], which automatically estimate the depth information from monoscopic videos, have been proposed. Compared to existing methods, we integrate several depth cues, such as motion and edge, by mimicking the depth perception of the human visual system.…"
Section: Introduction
confidence: 99%