2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2021
DOI: 10.1109/cvprw53098.2021.00316
Rethinking of Radar’s Role: A Camera-Radar Dataset and Systematic Annotator via Coordinate Alignment

Cited by 39 publications (16 citation statements)
References 22 publications
“…Ouaknine et al (2021) and Lim et al (2021), with CARRADA and RaDICal respectively, both contained significantly more scenes, along with denser range-angle-Doppler radar images and associated camera data. Lastly, Wang et al (2021) released a dataset with more complex scenes and a focus on learned object detection. These datasets each contribute to radar-visual object detection and semantic labeling research.…”
Section: Related Work
confidence: 99%
“…In addition, many of the datasets focus directly on autonomous vehicle navigation, apart from Leung et al (2017), which has a direct robotic application. This dataset instead targets low-cost, lightweight ADAS sensors such as those used in Caesar et al (2020), Lim et al (2021), Ouaknine et al (2021), and Wang et al (2021). Of the datasets that use ADAS sensors, only Caesar et al (2020) is focused on navigation, and it primarily uses the radar's own onboard signal processing and detection systems instead of streaming raw data.…”
Section: Introduction
confidence: 99%
“…While they provide accurate range and velocity measurements, they suffer from low azimuth resolution, leading to ambiguity when separating close objects. Recent datasets include processed radar representations such as the entire Range-Azimuth-Doppler (RAD) tensor [31], [43] or single views of this tensor, either Range-Azimuth (RA) [1], [38], [17], [41], [27] or Range-Doppler (RD) [27]. These representations require high bandwidth for transmission as well as large memory for storage.…”
Section: Radar Background
confidence: 99%
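The excerpt above distinguishes the full RAD tensor from its RA and RD views. A minimal numpy sketch can illustrate the relationship and the storage gap the excerpt refers to; the tensor shape and the max-pooling aggregation used here are illustrative assumptions, not details taken from any of the cited datasets.

```python
import numpy as np

# Hypothetical RAD tensor: (range bins, azimuth bins, Doppler bins).
# The shape is an illustrative assumption, not from a cited dataset.
rad = np.random.rand(256, 64, 128).astype(np.float32)

# Single views are obtained by aggregating out one axis; max-pooling is
# one common choice, used here purely for illustration.
ra = rad.max(axis=2)  # Range-Azimuth view, shape (256, 64)
rd = rad.max(axis=1)  # Range-Doppler view, shape (256, 128)

# The full tensor dwarfs either view, which is why transmitting or
# storing RAD tensors demands far more bandwidth and memory.
print(rad.nbytes, ra.nbytes, rd.nbytes)
```

With these illustrative sizes, the RAD tensor occupies 128x the memory of the RA view, which is the trade-off the excerpt raises against datasets that ship the entire tensor.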
“…Owing to the promising capabilities of HD radars, our work …

Dataset             Year  Size    Radar  Sensors  Annotations
[24]                2019  Small   HD     CL       3D Boxes
RadarRobotCar [1]   2020  Large   S      CLO
CARRADA [31]        2020  Small   LD     C        Segmentation
RADIATE [38]        2020  Medium  S      CLO      2D Boxes
MulRan [17]         2020  Medium  S      CLO
Zendar [27]         2020  Small   HD     CL       2D Boxes
CRUW [41]           2021  Medium  LD     C        Point Location
RadarScenes [36]    2021  Large   HD     CO       Point-wise
RADDet [43]         2021
Section: Introduction
confidence: 99%
“…Instead, raw data tensors and deep neural networks can be used to replace and improve traditional techniques for object detection, classification, and segmentation without losing information. Recently, radar datasets and challenges such as CARRADA [5], RADDet [6], and CRUW [7], in which radar data is provided as raw tensors, have opened up research on new deep learning methods for automotive radar, ranging from object detection [6], [8], [9] to object segmentation [10].…”
Section: Introduction
confidence: 99%