2020 IEEE Radar Conference (RadarConf20)
DOI: 10.1109/radarconf2043947.2020.9266510

Radar-camera Fusion for Road Target Classification

Cited by 25 publications (11 citation statements)
References 11 publications
“…RGB-radar fusion through deep neural network (DNN) processing has already been studied in a number of works, using different fusion strategies. Among those, the authors in [22] proposed a road target classification and tracking system using a 79-GHz FMCW radar and a standard imaging camera. They adopt a late fusion approach, applying object recognition independently to the camera data using a YOLOv3 detector [23], and to the radar data, using a CNN-LSTM network.…”
Section: Related Work
Confidence: 99%
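
The late fusion strategy described in this excerpt keeps the two recognition pipelines independent and combines only their outputs. A minimal sketch of decision-level fusion follows, assuming a weighted sum of per-class probabilities and an illustrative label set; neither the weights nor the classes are taken from the cited paper.

```python
# Minimal sketch of late (decision-level) fusion: each sensor branch
# classifies independently and only per-class scores are combined for a
# matched detection. Weights and label set are illustrative assumptions.
import numpy as np

CLASSES = ["pedestrian", "cyclist", "car"]  # assumed label set

def fuse_late(p_camera: np.ndarray, p_radar: np.ndarray,
              w_camera: float = 0.6, w_radar: float = 0.4) -> np.ndarray:
    """Fuse per-class probabilities from the camera branch (e.g. YOLOv3)
    and the radar branch (e.g. a CNN-LSTM classifier) for one matched
    detection, using a simple weighted sum; the cited work may use a
    different combination rule."""
    fused = w_camera * p_camera + w_radar * p_radar
    return fused / fused.sum()  # renormalize to a probability vector

# Example: the camera is confident the target is a car, the radar leans
# toward cyclist; the fused decision still favors the camera evidence.
p_cam = np.array([0.10, 0.15, 0.75])
p_rad = np.array([0.20, 0.50, 0.30])
fused = fuse_late(p_cam, p_rad)
print(CLASSES[int(np.argmax(fused))], fused.round(3))
```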
“…Finally, the radar data must be projected from the range-azimuth view to the same perspective domain as the RGB and DVS images. As the drone altitude is not fixed, a homography between the radar range-azimuth plane and the cameras cannot be computed directly as in [22], since the radar is not a projective sensor. This means that, as the drone altitude changes (for a fixed X, Y location), the radar detection map does not change, because the radar cannot distinguish the objects' elevation extents.…”
Section: A. Input Pre-processing
Confidence: 99%
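
The altitude dependence noted in this excerpt can be made concrete with a small sketch: placing a radar range-azimuth detection on the ground plane and projecting it into a downward-looking pinhole camera shows that the resulting pixel moves as the altitude changes, so no single precomputed homography can cover all altitudes. The intrinsics, the nadir camera pose, and the treatment of the measured range as ground-plane range are all illustrative assumptions, not details from the cited works.

```python
# Sketch of why a fixed radar-to-image homography fails when altitude
# varies: the projected pixel of the same (range, azimuth) detection
# depends on the drone altitude h. K and the nadir pose are assumptions.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed pinhole intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def radar_to_pixel(rng: float, azimuth_rad: float, altitude: float) -> np.ndarray:
    """Project a radar detection onto the image of a nadir-looking camera.
    The measured range is treated as ground-plane range for simplicity."""
    x = rng * np.sin(azimuth_rad)        # ground-plane offset from the drone
    y = rng * np.cos(azimuth_rad)
    p_cam = np.array([x, y, altitude])   # depth along the optical axis is h
    uv = K @ p_cam
    return uv[:2] / uv[2]

# The same detection lands on different pixels at different altitudes,
# so the radar-to-image mapping must be recomputed as the drone climbs.
for h in (10.0, 20.0, 40.0):
    print(h, radar_to_pixel(rng=3.0, azimuth_rad=np.deg2rad(20.0), altitude=h).round(1))
```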
“…Radar data alone lacks the accuracy needed to locate vehicle objects reliably, so it is weak at object recognition, and it is difficult to combine the strengths of the two data types to improve overall recognition performance. Aziz et al. [33] proposed a method that uses a 3D-CNN+LSTM to analyze MIMO radar data, applies the YOLO algorithm for image object detection, and then fuses the results through a projection transformation. Because MIMO radar provides two-dimensional spatial data, it supports object detection with convolutional neural networks.…”
Section: Related Work
Confidence: 99%
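
The 3D-CNN+LSTM pattern attributed to Aziz et al. can be sketched compactly: a 3D convolution extracts spatial features from each short radar clip, and an LSTM models the temporal sequence before classification. All layer sizes below are illustrative assumptions, not the architecture from [33] or the cited paper.

```python
# Hedged PyTorch sketch of a 3D-CNN + LSTM radar classifier: per-clip
# 3D convolutional features feed a temporal LSTM, whose last hidden
# state is classified. Layer sizes are illustrative, not from [33].
import torch
import torch.nn as nn

class Radar3DCNNLSTM(nn.Module):
    def __init__(self, n_classes: int = 3, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(               # spatial features per clip
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # -> (batch*time, 16, 1, 1, 1)
        )
        self.lstm = nn.LSTM(input_size=16, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 1, D, H, W) -- a sequence of radar clips
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1)  # (b*t, 16)
        seq_out, _ = self.lstm(feats.view(b, t, -1))      # (b, t, hidden)
        return self.head(seq_out[:, -1])                  # classify last step

# Example: 2 sequences of 5 time steps, each an 8x16x16 radar clip.
logits = Radar3DCNNLSTM()(torch.randn(2, 5, 1, 8, 16, 16))
print(logits.shape)  # torch.Size([2, 3])
```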