2019 IEEE Intelligent Transportation Systems Conference (ITSC)
DOI: 10.1109/itsc.2019.8916973
Integrating State-of-the-Art CNNs for Multi-Sensor 3D Vehicle Detection in Real Autonomous Driving Environments

Cited by 8 publications (12 citation statements)
References 16 publications
“…Barea et al. presented an integrated CNN for multi-sensor 3D vehicle detection in a real autonomous driving environment [58]. As the title indicates, their paper proposes an architecture that combines state-of-the-art object-detection methods, such as YOLO and Mask R-CNN for 3D segmentation, with a LiDAR point cloud.…”
Section: Image Pre-processing and Instance Segmentation
confidence: 99%
“…Barea et al. presented integrated state-of-the-art CNNs for multi-sensor 3D vehicle detection in a real autonomous driving environment [60]. As the title indicates, their paper proposes an architecture that combines state-of-the-art object-detection methods, such as YOLO and Mask R-CNN for 3D segmentation, with a LiDAR point cloud.…”
Section: Image Pre-processing and Instance Segmentation
confidence: 99%
“…For a more detailed explanation of the method, we refer the reader to the authors' earlier work [37].…”
Section: Perception Module
confidence: 99%
“…The outputs of this module are the linear speed and curvature that are sent to the Drive-By-Wire Module. For a more detailed explanation of the method, the reader is referred to [37].…”
Section: Control Module
confidence: 99%