Detecting vehicles robustly and efficiently is a key capability of fully autonomous cars. The topic has been studied extensively with GPU-accelerated deep learning approaches using image sensors and 3D LiDAR; however, few studies address it with a horizontally mounted 2D laser scanner. A 2D laser scanner is mounted on almost every autonomous vehicle because of its wide field of view, invariance to lighting, high accuracy, and relatively low price. In this paper, we propose a highly efficient search-based L-Shape fitting algorithm for detecting the positions and orientations of vehicles with a 2D laser scanner. Instead of formulating L-Shape fitting as a complex optimization problem, our method decomposes the task into two steps: L-Shape vertex searching and L-Shape corner localization. This decomposition keeps the computational complexity low, making the approach highly efficient. In on-road experiments, our approach adapts to a variety of circumstances with high efficiency and robustness.
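As a concrete illustration of the two-step decomposition described above, the sketch below shows one plausible search-based L-shape fit in Python: the cluster endpoints serve as the L-shape vertexes, and the corner is localized by searching every interior point for the split that minimizes the line-fitting error of the two edges. This is not the authors' exact procedure, and all function names are illustrative.

```python
import numpy as np

def _line_error(pts, a, b):
    """Sum of squared perpendicular distances from pts to the line through a and b."""
    d = b - a
    n = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-9)   # unit normal of the edge
    return float(np.sum(((pts - a) @ n) ** 2))

def fit_l_shape(points):
    """points: (N, 2) array of one vehicle cluster, ordered by scan angle.
    Step 1 (vertex search): take the cluster endpoints as the L-shape vertexes.
    Step 2 (corner localization): try every interior point as the corner and keep
    the split that minimizes the total line-fitting error of the two edges."""
    if len(points) < 3:
        raise ValueError("need at least 3 points to fit an L-shape")
    v1, v2 = points[0], points[-1]
    best_i, best_err = 1, np.inf
    for i in range(1, len(points) - 1):
        c = points[i]
        err = _line_error(points[: i + 1], v1, c) + _line_error(points[i:], c, v2)
        if err < best_err:
            best_i, best_err = i, err
    corner = points[best_i]
    heading = np.arctan2(*(corner - v1)[::-1])   # orientation of the v1-corner edge
    return v1, corner, v2, heading
```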
Detecting and locating surrounding vehicles robustly and efficiently are essential capabilities for autonomous vehicles. Existing solutions often rely on vision-based or 3D LiDAR-based methods, which are either expensive in sensor cost (3D LiDAR) and computation (camera and 3D LiDAR) or less robust against harsh environmental changes (camera). In this work, we revisit LiDAR-based vehicle detection with a less expensive 2D LiDAR by applying modern deep learning approaches, aiming to fill the gap left by the few previous works that provide an efficient and robust deep-learning-based detection solution in 2D. To this end, we propose a learning-based method that takes pseudo-images as input, named Cascade Pyramid Region Proposal Convolutional Neural Network (Cascade Pyramid RCNN), and a hybrid learning method that takes sparse points as input, named Hybrid Resnet Lite. Experiments are conducted on our newly recorded 2D LiDAR vehicle dataset captured in complex traffic environments. Results demonstrate that Cascade Pyramid RCNN outperforms state-of-the-art methods in accuracy, while Hybrid Resnet Lite achieves superior speed and a lightweight model by hybridizing learning-based and non-learning-based modules. Our research thus fills this gap and illustrates that, even with the limited sensing of a 2D LiDAR, obstacles such as vehicles can still be detected efficiently and robustly.
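The abstract does not specify how scans are encoded as pseudo-images; the minimal sketch below, written under that assumption, rasterizes a single 2D LiDAR scan into a bird's-eye-view occupancy grid that a CNN detector such as the proposed Cascade Pyramid RCNN could consume. The grid size, range limit, and single-channel encoding are illustrative choices, not the paper's.

```python
import numpy as np

def scan_to_pseudo_image(ranges, angles, grid_size=256, max_range=30.0):
    """Rasterize one 2D LiDAR scan into a bird's-eye-view occupancy grid
    ("pseudo-image"). ranges and angles are 1D numpy arrays of equal length
    (metres, radians); the sensor sits at the image centre."""
    img = np.zeros((grid_size, grid_size), dtype=np.float32)
    valid = (ranges > 0) & (ranges < max_range)
    x = ranges[valid] * np.cos(angles[valid])           # forward axis
    y = ranges[valid] * np.sin(angles[valid])           # left axis
    scale = grid_size / (2.0 * max_range)               # metres -> pixels
    col = np.clip(((x + max_range) * scale).astype(int), 0, grid_size - 1)
    row = np.clip(((max_range - y) * scale).astype(int), 0, grid_size - 1)
    img[row, col] = 1.0                                 # mark occupied cells
    return img
```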
Hand gesture recognition is a noncontact and intuitive communication approach that allows natural and efficient interaction. This work focuses on developing a novel and robust gesture recognition system that is insensitive to environmental illumination and background variation. Standard vision sensors, such as CMOS cameras, are widely used as the sensing devices in state-of-the-art hand gesture recognition systems. However, such cameras are sensitive to environmental conditions, such as lighting variability and cluttered backgrounds, which significantly degrade their performance. In this work, we propose an event-based gesture recognition system to overcome these detrimental constraints and enhance recognition robustness. Our system relies on a biologically inspired neuromorphic vision sensor that has microsecond temporal resolution, high dynamic range, and low latency. The sensor output is a sequence of asynchronous events instead of discrete frames. To interpret the visual data, we utilize a wearable glove as an interaction device with five high-frequency (>100 Hz) active LED markers (ALMs), representing the fingers and palm, which are tracked precisely in the temporal domain using a restricted spatiotemporal particle filter algorithm. The latency of the sensing pipeline is negligible.
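The restricted spatiotemporal particle filter itself is not detailed here; as a simpler stand-in, the sketch below shows how the >100 Hz blink rate of the ALMs can be used to separate marker pixels from slower background motion in an event stream by thresholding per-pixel event counts within a short window. The event format, sensor resolution, and window length are assumptions.

```python
import numpy as np

def find_alm_pixels(events, window=0.02, min_freq=100.0, sensor_shape=(240, 180)):
    """Flag pixels whose event count inside a short window exceeds what a
    >100 Hz blinking LED would produce.
    events: iterable of (t, x, y, polarity) tuples sorted by time, t in seconds."""
    counts = np.zeros(sensor_shape, dtype=np.int32)     # indexed as [x, y]
    t0 = None
    for t, x, y, _ in events:
        if t0 is None:
            t0 = t
        if t - t0 > window:
            break                              # consider a single time window
        counts[x, y] += 1
    threshold = 2 * min_freq * window          # one ON and one OFF event per blink cycle
    return [tuple(p) for p in np.argwhere(counts >= threshold)]
```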