I. ABSTRACT

Millimeter-wave (mmW) radars are being increasingly integrated into commercial vehicles to support new advanced driver-assistance systems (ADAS), owing to their ability to provide high-accuracy location, velocity, and angle estimates of objects largely independent of environmental conditions. Such radar sensors not only perform basic functions such as detection and ranging/angular localization, but also provide critical inputs for environmental perception via object recognition and classification. To explore radar-based ADAS applications, we have assembled a lab-scale frequency-modulated continuous-wave (FMCW) radar test-bed (https://depts.washington.edu/funlab/research) based on Texas Instruments' (TI) automotive chipset family. In this work, we describe the test-bed components and provide a summary of FMCW radar operational principles. To date, we have created a large raw radar dataset for various objects under controlled scenarios. Thereafter, we apply radar imaging algorithms to the collected dataset and present preliminary results that validate its capabilities in terms of object recognition.

Fig. 1: (a) FMCW radar test-bed (red board: AWR1642 BOOST; green board: DCA1000 EVM). (b) Vehicle-mounted platform for dataset collection.

II. INTRODUCTION

Over the years, advances in 77 GHz RF design with integrated digital CMOS and packaging have enabled low-cost radar-on-chip and antenna-on-chip systems [1]. As a result, several vehicular radar vendors are refining their radar chipset solutions for the automotive segment. TI's state-of-the-art 77 GHz FMCW radar chips and their corresponding evaluation boards (AWR1443, AWR1642, and AWR1843) are built with the low-power 45 nm RF CMOS process and enable unprecedented levels of integration in an extremely small form factor [2]. Uhnder has also recently unveiled a new, all-digital phase-modulated continuous-wave (PMCW) radar chip that uses the 28 nm RF CMOS process and is capable of synthesizing multiple-input multiple-output (MIMO) radar capability with 192 virtual receivers, thereby obtaining a finer angular resolution [3]. However, compared to FMCW radars, PMCW radars shift the modulation complexity/precision to the high-speed data converters and the DSP. Overall, continual progress in radar chip designs is expected to enable further novel on-platform integration and, consequently, lead to enhanced performance in support of ADAS elements such as adaptive cruise control, automatic emergency braking, and lane-change assistance [1].

The above applications fundamentally rely on advanced radar imaging, detection, clustering, tracking, and classification algorithms. Significant research in the context of automotive radar classification has demonstrated its feasibility as a good alternative when optical sensors fail to provide adequate performance. [4] reported that, with handcrafted feature extraction from range and Doppler profiles, over 90% accuracy can be achieved when using the support vector machine (SVM) algorithm to distinguish cars and pedestrians. Other studies used the short...
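Since the paper's summary of FMCW operational principles is not reproduced here, the following minimal sketch illustrates the core idea: a point target at range R produces a dechirped beat frequency f_b = 2RS/c (S being the chirp slope), so a single FFT over one chirp's ADC samples localizes the target in range. The chirp slope, sampling rate, and sample count below are assumed values loosely modeled on a TI AWR1642-class device, not the test-bed's actual configuration.

```python
import numpy as np

# Hedged sketch of FMCW range processing; parameters are assumptions,
# loosely modeled on a TI AWR1642-class device.
c = 3e8            # speed of light (m/s)
S = 30e12          # chirp slope (Hz/s), assumed 30 MHz/us
fs = 10e6          # ADC sampling rate (Hz), assumed
n_samples = 256    # ADC samples per chirp, assumed

# Simulate the dechirped (beat) signal of a point target at 20 m:
# the beat frequency is proportional to range, f_b = 2*R*S/c.
R_true = 20.0
f_beat = 2 * R_true * S / c
t = np.arange(n_samples) / fs
beat = np.exp(2j * np.pi * f_beat * t)

# Range FFT: each bin spans c*fs / (2*S*n_samples) meters (~0.2 m here).
spectrum = np.abs(np.fft.fft(beat * np.hanning(n_samples)))
peak_bin = int(np.argmax(spectrum[: n_samples // 2]))
R_hat = peak_bin * c * fs / (2 * S * n_samples)
print(f"estimated range: {R_hat:.2f} m (true: {R_true:.1f} m)")
```

Velocity and angle follow analogously from FFTs across chirps (Doppler) and across receive antennas (angle), which is what produces the range-Doppler and range-azimuth maps used by the classification studies cited above.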
Various autonomous or assisted driving strategies have been facilitated by accurate and reliable perception of the environment around a vehicle. Among the commonly used sensors, radar has usually been considered a robust and cost-effective solution even in adverse driving scenarios, e.g., weak/strong lighting or bad weather. Instead of fusing potentially unreliable information from all available sensors, perception from radar data alone becomes a valuable alternative that is worth exploring. However, unlike the rich RGB images captured by a camera, it is noticeably difficult to extract semantic information from radar signals. In this paper, we propose a deep radar object detection network, named RODNet, which is cross-supervised by a camera-radar fused algorithm without laborious annotation efforts, to effectively detect objects from radio frequency (RF) images in real time. First, the raw signals captured by millimeter-wave radars are transformed to RF images in range-azimuth coordinates. Second, our proposed RODNet takes a sequence of RF images as input to predict the likelihood of objects in the radar field of view (FoV). Two customized modules are also added to handle multi-chirp information and object relative motion. Instead of using human-labeled ground truth for training, the proposed RODNet is cross-supervised by a novel 3D localization of detected objects using a camera-radar fusion (CRF) strategy in the training stage. Finally, we propose a method to evaluate the object detection performance of the RODNet. Because no existing public dataset is available for our task, we create a new dataset, named CRUW, which contains synchronized RGB and RF image sequences in various driving scenarios. Through extensive experiments, our proposed cross-supervised RODNet achieves 86% average precision and 88% average recall in object detection, demonstrating robustness to noisy scenarios under various driving conditions.
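As a concrete illustration of the RF-image formation step described above (raw signals to range-azimuth coordinates), the sketch below applies a range FFT along fast time and an angle FFT across the receive array, then averages chirps non-coherently. The array shapes, zero-padding, and averaging are illustrative assumptions, not the paper's exact pipeline; random numbers stand in for a real ADC capture.

```python
import numpy as np

# Hedged sketch: one radar frame -> range-azimuth RF image.
# adc: (n_chirps, n_rx, n_samples) complex baseband samples (assumed shapes);
# random data stands in for a real capture here.
n_chirps, n_rx, n_samples = 64, 8, 256
rng = np.random.default_rng(0)
adc = (rng.standard_normal((n_chirps, n_rx, n_samples))
       + 1j * rng.standard_normal((n_chirps, n_rx, n_samples)))

# 1) Range FFT along fast time (per-chirp ADC samples).
range_fft = np.fft.fft(adc, axis=2)

# 2) Angle FFT across the receive array, zero-padded for finer azimuth bins.
n_angle = 128
range_angle = np.fft.fftshift(np.fft.fft(range_fft, n=n_angle, axis=1), axes=1)

# 3) Non-coherent average over chirps -> one (range x azimuth) heatmap.
rf_image = np.abs(range_angle).mean(axis=0).T   # shape: (n_samples, n_angle)
print(rf_image.shape)                           # (256, 128)
```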
Millimeter-wave (mmW) radars are being increasingly integrated into commercial vehicles to support new advanced driver-assistance systems (ADAS) by enabling robust and high-performance object detection, localization, and recognition, a key component of environmental perception. In this paper, we propose a novel radar multiple-perspectives convolutional neural network (RAMP-CNN) that extracts the location and class of objects by further processing range-velocity-angle (RVA) heatmap sequences. To bypass the complexity of 4D convolutional neural networks (NNs), we propose to combine several lower-dimensional NN models within our RAMP-CNN model, which nonetheless approaches the performance upper bound with lower complexity. Extensive experiments show that the proposed RAMP-CNN model achieves better average recall (AR) and average precision (AP) than prior works in all testing scenarios (see Table III). Moreover, the RAMP-CNN model is validated to work robustly at night, which enables low-cost radars as a potential substitute for pure optical sensing under severe conditions.
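To make the dimensionality-reduction idea concrete, here is a hedged PyTorch sketch in the spirit of the multi-perspective design: three inexpensive 2D-convolutional branches operate on the range-velocity, range-angle, and velocity-angle projections of an RVA heatmap, and their features are fused. The channel counts, projection-by-averaging, and classification head are illustrative assumptions, not the authors' RAMP-CNN architecture.

```python
import torch
import torch.nn as nn

# Hedged sketch of the multi-perspective idea: instead of one costly 4D
# convolution over range-velocity-angle(-time), run cheap 2D branches on
# three projections of the RVA heatmap and fuse their features.
class MultiPerspectiveNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
        self.rv, self.ra, self.va = branch(), branch(), branch()
        self.head = nn.Linear(3 * 32, n_classes)

    def forward(self, rva: torch.Tensor) -> torch.Tensor:
        # rva: (batch, range, velocity, angle)
        rv = rva.mean(dim=3).unsqueeze(1)   # (B, 1, range, velocity)
        ra = rva.mean(dim=2).unsqueeze(1)   # (B, 1, range, angle)
        va = rva.mean(dim=1).unsqueeze(1)   # (B, 1, velocity, angle)
        feats = [self.rv(rv), self.ra(ra), self.va(va)]
        return self.head(torch.cat([f.flatten(1) for f in feats], dim=1))

net = MultiPerspectiveNet()
scores = net(torch.randn(2, 64, 32, 128))   # a batch of two RVA heatmaps
print(scores.shape)                         # torch.Size([2, 3])
```

Each 2D branch costs far less than a full 4D convolution over the same tensor, which is the complexity trade-off the abstract describes.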