2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8967739

LiDAR-Flow: Dense Scene Flow Estimation from Sparse LiDAR and Stereo Images

Abstract: We propose a new approach called LiDAR-Flow to robustly estimate a dense scene flow by fusing a sparse LiDAR with stereo images. We take advantage of the high accuracy of LiDAR to resolve the lack of information in regions of stereo images affected by textureless objects, shadows, ill-conditioned lighting, and more. Additionally, this fusion overcomes the difficulty of matching unstructured 3D points between LiDAR-only scans. Our LiDAR-Flow approach consists of three main steps; each of the…
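The fusion rests on bringing the sparse LiDAR returns into the stereo image domain before dense matching. The following is a minimal sketch of that projection step, not the authors' implementation: LiDAR points are transformed into the left camera frame, projected with the pinhole model, and converted into sparse disparity seeds. The calibration inputs (K, T_cam_lidar, baseline) and the function name are assumptions for illustration.

import numpy as np

def lidar_to_sparse_disparity(points_lidar, K, T_cam_lidar, baseline, h, w):
    """Project LiDAR points into the left image and turn depth into disparity.

    points_lidar : (N, 3) xyz returns in the LiDAR frame (assumed input)
    K            : (3, 3) left-camera intrinsics (assumed calibration)
    T_cam_lidar  : (4, 4) rigid transform LiDAR -> camera (assumed calibration)
    baseline     : stereo baseline in metres (assumed)
    """
    # Move the points into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]        # keep points in front of the camera

    # Pinhole projection to pixel coordinates.
    uvw = (K @ pts_cam.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    z = pts_cam[:, 2]

    # Discard projections outside the image bounds.
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Sparse disparity seeds: d = f_x * B / Z at the hit pixels, 0 elsewhere.
    disparity = np.zeros((h, w), dtype=np.float32)
    disparity[v[ok], u[ok]] = K[0, 0] * baseline / z[ok]
    return disparity

Seeds of this kind are what lets the dense estimation stay anchored in regions where stereo matching alone fails, e.g. textureless surfaces or shadows.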

Cited by 32 publications (24 citation statements). References 37 publications.
“…[14] extracts point-cloud semantic features through a CNN and integrates the semantics into surfels to filter out moving objects; similarly, [18], [19] use CNNs for semantic segmentation to detect moving objects. [16] proposes a method called LiDAR-Flow, which provides robust estimation of dense scene flow by fusing sparse LiDAR with stereo images. [17] and [20] fuse temporal information to detect dynamic objects: they take multi-frame point clouds as input and regress the motion of objects in the bird's-eye view through a network. The advantage of this kind of method is that it can detect all moving objects in the LiDAR's field of view, including objects not seen in the training set, which is of great significance for the safety of autonomous driving.…”
Section: B. LiDAR Dynamic Object Detection
confidence: 99%
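To make the link between scene flow and dynamic-object detection concrete, here is a hypothetical sketch, not taken from any of the cited works: given a per-point scene flow estimate and an ego-motion estimate, points whose residual motion exceeds a speed threshold are flagged as dynamic. The transform ego_T, the time step dt, and the 0.5 m/s threshold are all assumptions.

import numpy as np

def flag_dynamic_points(points_t0, scene_flow, ego_T, dt=0.1, speed_thresh=0.5):
    """Flag points as dynamic from per-point scene flow (illustrative only).

    points_t0    : (N, 3) points at time t0 in the sensor frame
    scene_flow   : (N, 3) estimated 3D flow of each point over dt
    ego_T        : (4, 4) sensor motion from t0 to t1 (assumed known or estimated)
    speed_thresh : residual speed in m/s above which a point counts as moving
    """
    pts_h = np.hstack([points_t0, np.ones((len(points_t0), 1))])
    # Where a *static* point would appear at t1, seen from the moved sensor.
    static_pred = (np.linalg.inv(ego_T) @ pts_h.T).T[:, :3]

    # Motion left over after removing the ego-motion contribution.
    residual = (points_t0 + scene_flow) - static_pred
    speed = np.linalg.norm(residual, axis=1) / dt
    return speed > speed_thresh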
“…Camera-LiDAR Fusion. Cameras and LiDARs have complementary characteristics, facilitating many computer vision tasks such as depth estimation [13,30,55], scene flow estimation [2,41], and 3D object detection [10,27,36,45,51]. Some researchers [2,36,45,55] build a modular network and perform result-level fusion, while others [13,27,30,41,51] explore feature-level fusion schemes, including early fusion and late fusion. Instead, we propose a multi-stage and bidirectional fusion pipeline, which not only fully utilizes the characteristics of each modality but also maximizes the inter-modality complementarity.…”
Section: Related Work
confidence: 99%
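A toy contrast between the two fusion styles mentioned above, purely illustrative: result-level fusion combines two finished depth estimates, while early feature-level fusion stacks a sparse-depth channel onto the image features before any prediction is made. The array shapes, the validity mask, and both functions are assumptions, not any cited architecture.

import numpy as np

def result_level_fusion(depth_stereo, depth_lidar, lidar_valid):
    """Combine two finished depth maps: trust LiDAR wherever it has returns."""
    fused = depth_stereo.copy()
    fused[lidar_valid] = depth_lidar[lidar_valid]
    return fused

def early_feature_fusion(image_features, sparse_depth):
    """Append a sparse-depth channel to image features before any network head."""
    return np.concatenate([image_features, sparse_depth[..., None]], axis=-1)

# Example shapes: an H x W x C feature map plus an H x W sparse-depth channel.
H, W, C = 4, 6, 8
features = np.random.rand(H, W, C)
sparse = np.zeros((H, W))
sparse[1, 2] = 12.3                                       # a single LiDAR return
fused_features = early_feature_fusion(features, sparse)   # shape (H, W, C + 1)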
“…RIC-Flow does not use the raw matches directly but generates a superpixel flow from the input matches to improve the efficiency of the model estimation. This concept was also adapted for scene flow estimation from sparse LiDAR and RGB image input [15]. Our approach, which we describe in detail in Section III, applies the superpixel method in the context of depth completion.…”
Section: Related Work
confidence: 99%
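As a simplified sketch of the superpixel idea in depth completion, not the approach described in the cited work: segment the RGB image into superpixels and give every pixel of a segment the median of the sparse LiDAR depths that fall inside it. The SLIC parameters and the convention that missing depth is stored as 0 are assumptions.

import numpy as np
from skimage.segmentation import slic

def superpixel_depth_completion(rgb, sparse_depth, n_segments=800):
    """Densify sparse LiDAR depth with one value per superpixel (illustrative).

    rgb          : (H, W, 3) image with values in [0, 1]
    sparse_depth : (H, W) depth map, 0 where there is no LiDAR return (assumed)
    """
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    dense = np.zeros_like(sparse_depth)
    for sp in np.unique(labels):
        mask = labels == sp
        seeds = sparse_depth[mask & (sparse_depth > 0)]
        if seeds.size:
            dense[mask] = np.median(seeds)   # one depth value per superpixel
    return dense

A real pipeline would typically fit a plane or an affine motion model per superpixel rather than a constant, but the grouping step is the same idea as the superpixel flow described above.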