2016 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2016.7487715

Rolling shutter and motion blur removal for depth cameras

Cited by 10 publications (4 citation statements)
References 15 publications

“…Sterzentsenko et al [16] used self-supervision to train a deep autoencoder to combat the lack of real world datasets with noise-free ground truth depths. The work from Tourani et al [18] deals with the removal of motion artifacts from rolling shutters, which are common in structured sensors such as the Kinect. Li et al [19] use a two-branched CNN to simultaneously remove motion blur from a color and a depth image.…”
Section: Related Work
confidence: 99%
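As a rough illustration of the two-branch idea mentioned in the excerpt above, the hypothetical PyTorch sketch below feeds a blurry color image and a blurry depth map through separate encoders, fuses their features, and predicts a sharp residual for each modality. The channel widths, fusion step, and residual output are assumptions for illustration, not the architecture of Li et al [19].

```python
# Hypothetical two-branch deblurring network: separate color/depth encoders,
# a shared fusion trunk, and per-modality residual decoders. Illustrative
# only; not the network of Li et al [19].
import torch
import torch.nn as nn

class TwoBranchDeblurNet(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        # Separate encoders for the color and depth modalities.
        self.rgb_enc = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.depth_enc = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Shared trunk operating on the concatenated features.
        self.fusion = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Separate heads predict a sharpening residual per modality.
        self.rgb_dec = nn.Conv2d(feat, 3, 3, padding=1)
        self.depth_dec = nn.Conv2d(feat, 1, 3, padding=1)

    def forward(self, rgb_blur, depth_blur):
        f = self.fusion(torch.cat([self.rgb_enc(rgb_blur),
                                   self.depth_enc(depth_blur)], dim=1))
        # Residual prediction: output = blurry input + learned correction.
        return rgb_blur + self.rgb_dec(f), depth_blur + self.depth_dec(f)

# Usage on a dummy 480x640 RGB-D frame pair.
net = TwoBranchDeblurNet()
sharp_rgb, sharp_depth = net(torch.rand(1, 3, 480, 640),
                             torch.rand(1, 1, 480, 640))
```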
“…This fails to incorporate the inherent geometric structure of depth maps. While research on depth-only enhancement exists [10, 11], a majority of recent work has focused on some form of intensity or RGB-guided depth enhancement, e.g., for super resolution [12, 13, 14, 15], denoising [16, 17] or motion blur removal [18, 19]. While this greatly improves the quality of the resultant depth images, these additional RGB sensors are not always available.…”
Section: Introduction
confidence: 99%
“…Few works consider the challenging situation that joint RS distortion and blur appear in the images simultaneously. Tourani et al [36] use feature matches between depth maps to timestamp parametric ego-motion to further achieve RSCD. Their method needs multiple RGBD images as inputs.…”
Section: Joint Correction and Deblurring
confidence: 99%
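To make the rolling-shutter correction idea concrete, the NumPy sketch below assumes a constant-velocity ego-motion, assigns each depth-map row a linear acquisition timestamp, and maps every back-projected point into the frame of the first row. The pinhole intrinsics, the row-time model, and the velocity parameterization are illustrative assumptions, not the formulation of Tourani et al [36].

```python
# Sketch: rolling-shutter correction of a depth map under a constant-velocity
# ego-motion assumption. Each row r is acquired at its own time t(r); its
# points are moved back into the camera frame of row 0. Illustrative only.
import numpy as np

def rodrigues(w):
    """Rotation matrix from a rotation vector (axis * angle)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def correct_rolling_shutter(depth, fx, fy, cx, cy, omega, v, readout_time):
    """Return 3D points expressed in the camera frame of the first image row.

    depth        : (H, W) depth map in meters (0 = invalid)
    omega, v     : angular (rad/s) and linear (m/s) camera velocity
    readout_time : time to scan from the first to the last row (s)
    """
    H, W = depth.shape
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project every pixel with the pinhole model.
    z = depth
    x = (us - cx) / fx * z
    y = (vs - cy) / fy * z
    pts = np.stack([x, y, z], axis=-1)          # (H, W, 3), per-row frames

    corrected = np.zeros_like(pts)
    for r in range(H):
        t = readout_time * r / (H - 1)          # acquisition time of row r
        # Camera pose at time t relative to the row-0 frame.
        R = rodrigues(np.asarray(omega) * t)
        p = np.asarray(v) * t
        # Map row-r points back into the row-0 camera frame.
        corrected[r] = pts[r] @ R.T + p
    corrected[depth <= 0] = 0                   # keep invalid pixels at zero
    return corrected
```

Re-projecting the corrected points with the same intrinsics then yields a depth map as it would have been captured by a global-shutter sensor at the time of the first row.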
“…As many computer vision algorithms such as semantic segmentation, object detection, or visual odometry rely on visual input, blurry images challenge the performance of these algorithms. It is well known that many algorithms (e.g., depth prediction, feature detection, motion estimation, or object recognition) suffer from motion blur [17], [25], [26], [33]. The motion deblurring problem has thus received considerable attention in the past [7], [17], [21], [28], [32].…”
Section: Introduction
confidence: 99%