2019
DOI: 10.1109/tcsvt.2018.2832086

Depth Map Estimation Using Defocus and Motion Cues

Cited by 17 publications (6 citation statements)
References 31 publications
“…Nazir et al. [20] suggested a deep convolutional neural network to estimate depth and deblur the image. Kumar et al. [21] presented a novel technique to generate a more accurate depth map for dynamic scenes using a combination of defocus and motion cues. The combination was performed by keeping the parameters of the defocus edge points aligned in the motion direction and estimating the camera parameters with the help of motion and defocus relations.…”
Section: Related Work
Mentioning confidence: 99%
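The defocus cue referred to in the statement above rests on the standard thin-lens relation between blur-circle size and scene depth. The sketch below is not the method of [21]; it is a minimal illustration of that underlying relation, assuming a thin-lens model with focal length `f`, aperture diameter `A`, and focus distance `d_f` (all names are illustrative, distances in metres):

```python
def blur_diameter(d, f, d_f, A):
    """Blur-circle diameter for a point at depth d, under the thin-lens
    model, when a lens of focal length f and aperture diameter A is
    focused at depth d_f. Simplifies to b = A * v_f * |1/d_f - 1/d|,
    where v_f is the lens-to-sensor distance."""
    v_f = 1.0 / (1.0 / f - 1.0 / d_f)  # thin-lens equation: 1/f = 1/d_f + 1/v_f
    return A * v_f * abs(1.0 / d_f - 1.0 / d)


def depth_from_blur(b, f, d_f, A, behind=True):
    """Invert the relation above. A single blur measurement admits two
    depths (in front of or behind the focal plane); `behind` selects
    the solution farther than d_f."""
    v_f = 1.0 / (1.0 / f - 1.0 / d_f)
    sign = -1.0 if behind else 1.0
    return 1.0 / (1.0 / d_f + sign * b / (A * v_f))
```

For example, with a 50 mm f/2 lens (`f=0.05`, `A=0.025`) focused at 2 m, a point at 4 m yields a blur of roughly 0.32 mm, and `depth_from_blur` recovers the 4 m depth from that blur. The two-fold front/back ambiguity is one reason defocus is typically combined with a second cue such as motion, as in the cited work.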
“…[30]–[39] adopted different strategies to obtain reliable depth estimation from a single camera by learning to exploit monocular cues such as shadows, occlusions, and relative scales between objects. In this field, a particularly appealing practice consists of training end-to-end models in a self- or semi-supervised manner [12], [36], replacing the need for ground-truth depth labels with image reprojection across different viewpoints according to two main strategies: acquiring images with a single, moving camera [36], [38], [40], [41] or using a stereo camera [10], [12], [37], [42]–[44].…”
Section: A. Depth Estimation
Mentioning confidence: 99%
“…With the development of computer vision, depth estimation [14,30,34], defocus estimation [2,32,7], and saliency detection [54,38,43,56] have made significant progress, which also provides more directions for solving bokeh rendering tasks. Many methods [48,37,11,17] utilize this prior knowledge to synthesize bokeh effects.…”
Section: Related Work
Mentioning confidence: 99%