2012 Sixth International Conference on Genetic and Evolutionary Computing
DOI: 10.1109/icgec.2012.131
Semi-automatic Depth Map Extraction Method for Stereo Video Conversion

Cited by 3 publications (5 citation statements) · References 11 publications
“…However, a 2D-to-3D rendering approach that uses only one of these features may not be widely applicable. Therefore, 2D-to-3D rendering approaches that use mixed features have been employed [27][28][29]. The CGDMs calculated by the mixed-features-based method are more stable than those calculated by a single-feature-based method.…”
Section: Introduction (mentioning)
confidence: 99%
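As context for the mixed-features statement above, a minimal sketch of cue fusion is given below: several per-pixel depth cues are normalized and blended by weighted averaging into one combined depth map. The fuse_depth_cues helper, the three stand-in cues, and the weight values are illustrative assumptions, not the procedure of the cited papers.

```python
import numpy as np

def fuse_depth_cues(cues, weights):
    """Fuse several per-pixel depth cues into one map (hypothetical helper)."""
    h, w = cues[0].shape
    fused = np.zeros((h, w), dtype=np.float64)
    total = float(sum(weights))
    for cue, wgt in zip(cues, weights):
        c = cue.astype(np.float64)
        c = (c - c.min()) / (c.max() - c.min() + 1e-8)  # normalize each cue to [0, 1]
        fused += (wgt / total) * c                      # weighted average of cues
    return fused

if __name__ == "__main__":
    # Random stand-ins for geometry, defocus, and saliency cues (placeholders only).
    rng = np.random.default_rng(0)
    cues = [rng.random((120, 160)) for _ in range(3)]
    depth = fuse_depth_cues(cues, weights=[0.5, 0.3, 0.2])
    print(depth.shape, float(depth.min()), float(depth.max()))
```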
“…Vanishing line detection [10][11][12][13]; Depth from model: color theory [14,15,23,34]; Depth from defocus: use blur information to get depth value [16][17][18][19]; Depth from visual saliency: estimation in region of interest [20][21][22]. 5. Four kinds of scanning modes to fix the depth map; 6.…”
Section: Overview of the Proposed Methods (mentioning)
confidence: 99%
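The "depth from defocus" entry above relies on blur as a depth cue. Below is a minimal sketch, assuming a Laplacian-based local focus measure and that blurrier regions are farther away; the defocus_depth_sketch name and the window size are assumptions, not taken from the cited works.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def defocus_depth_sketch(gray, window=15):
    """Rough depth-from-defocus cue (illustrative only).

    gray   : 2-D float array, a grayscale frame
    window : size of the local averaging window (assumed parameter)
    """
    g = gray.astype(np.float64)
    # Local focus measure: average absolute Laplacian response per neighborhood.
    sharpness = uniform_filter(np.abs(laplace(g)), size=window)
    sharpness = (sharpness - sharpness.min()) / (sharpness.max() - sharpness.min() + 1e-8)
    # Whether blur means "far" or "near" depends on the scene and focus setting;
    # treating blurrier (less sharp) pixels as farther is an assumption here.
    return 1.0 - sharpness

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.random((120, 160))
    cue = defocus_depth_sketch(frame)
    print(cue.shape, float(cue.min()), float(cue.max()))
```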
“…Battiato [10] proposed a method based on the position of lines and vanishing points to derive a suitable depth assignment. In [12], a semi-automatic method was presented that generates stereoscopic views by estimating depth information from a single input video frame while reducing the required computational resources. Depth from a model is also used to estimate the depth value in [14,15].…”
Section: Introduction (mentioning)
confidence: 99%
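Vanishing-line-based depth assignment first needs a vanishing point. The sketch below estimates one by intersecting Hough line segments and taking the median intersection; the Canny/Hough thresholds and the estimate_vanishing_point helper are assumed for illustration and do not reproduce Battiato's procedure or the method of the paper under review.

```python
import itertools
import cv2
import numpy as np

def estimate_vanishing_point(gray):
    """Estimate a single vanishing point from straight lines (rough sketch)."""
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                           minLineLength=40, maxLineGap=10)
    if segs is None:
        return None
    # Represent each segment as a homogeneous line l = p1 x p2.
    lines = [np.cross([x1, y1, 1.0], [x2, y2, 1.0]) for x1, y1, x2, y2 in segs[:, 0]]
    pts = []
    for l1, l2 in itertools.combinations(lines, 2):
        p = np.cross(l1, l2)              # intersection in homogeneous coordinates
        if abs(p[2]) > 1e-6:
            pts.append(p[:2] / p[2])
    if not pts:
        return None
    return np.median(np.array(pts), axis=0)  # median as a robust central intersection

if __name__ == "__main__":
    # Two synthetic lines meeting near (160, 120).
    img = np.zeros((240, 320), dtype=np.uint8)
    cv2.line(img, (0, 239), (160, 120), 255, 2)
    cv2.line(img, (319, 239), (160, 120), 255, 2)
    print(estimate_vanishing_point(img))
```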
“…For the purpose of stereoscopic 3D conversion, Phan et al [15] proposed a module that reduces the amount of user input, since only the first frame needs to be marked. As a semi-automatic conversion approach, we reported an earlier result for static background scenes [16]. The main concept of this design is based on vanishing point detection for depth map realization.…”
Section: Related Work (mentioning)
confidence: 99%
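Given a detected vanishing point, one common way to realize a depth map is a gradient that grows with distance from that point. The sketch below uses a linear fall-off and treats the vanishing point as the farthest region; both choices, and the depth_from_vanishing_point helper, are illustrative assumptions rather than the cited method.

```python
import numpy as np

def depth_from_vanishing_point(shape, vp, far_value=255):
    """Build a simple depth gradient map around a vanishing point (sketch).

    shape : (height, width) of the frame
    vp    : (x, y) vanishing point, e.g. from line detection
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - vp[0], ys - vp[1])        # distance of each pixel to the VP
    depth = far_value * (1.0 - dist / dist.max())  # VP region assigned the largest value
    return depth.astype(np.uint8)

if __name__ == "__main__":
    dmap = depth_from_vanishing_point((240, 320), vp=(160, 120))
    print(dmap.shape, int(dmap.min()), int(dmap.max()))
```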