To overcome the disadvantages of applying RFID to outdoor vehicle positioning in a completely GPS-denied environment, a fusion vehicle positioning strategy based on the integration of RFID and in-vehicle sensors is proposed. To obtain higher performance, both preliminary and fusion positioning algorithms are studied. First, a preliminary positioning algorithm that relies on RFID alone is developed. In this algorithm, the range from the RFID tags to the reader is estimated from the received signal strength using the extreme learning machine algorithm, and a first-level adaptive extended Kalman filter (AEKF), which can accommodate uncertainties in the description of the RFID observation noise, then computes the vehicle’s location. Further, to compensate for the deficiencies of the preliminary positioning, the in-vehicle sensors are fused with RFID. A second-level adaptive decentralized information filter (ADIF) is designed to achieve this fusion. In implementing the ADIF, an improved vehicle motion model is established to describe the motion of the vehicle accurately. To isolate RFID failures and fuse multiple observation sources with different sample rates, a decentralized architecture is employed instead of a centralized EKF. Meanwhile, an adaptive rule is designed to judge the effectiveness of the preliminary positioning result and decide whether to exclude the RFID observations. Finally, the proposed strategy is verified through field tests. The results validate that the proposed strategy has higher accuracy and reliability than traditional methods.
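To make the first-level filtering step concrete, below is a minimal sketch of one predict/update cycle of an innovation-based adaptive EKF for a single RFID range measurement. The constant-velocity motion model, the range Jacobian, and the forgetting-factor noise update (`alpha`) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def aekf_step(x, P, z, tag_pos, F, Q, r, alpha=0.95):
    """One AEKF predict/update cycle for a single RFID range measurement.

    x: state [px, py, vx, vy]; z: measured range to one tag;
    r: current scalar estimate of the observation-noise variance.
    """
    # Predict with the (assumed) constant-velocity model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Nonlinear range observation h(x) = ||p - tag_pos|| and its Jacobian.
    diff = x_pred[:2] - tag_pos
    rng = np.linalg.norm(diff)
    H = np.zeros((1, 4))
    H[0, :2] = diff / rng

    # Innovation-based adaptation of the observation-noise variance:
    # a simple forgetting-factor rule, used here as a stand-in for the
    # paper's adaptive scheme.
    y = z - rng
    r = alpha * r + (1 - alpha) * max(y**2 - float(H @ P_pred @ H.T), 1e-4)

    # Standard EKF update with the adapted noise.
    S = float(H @ P_pred @ H.T) + r
    K = (P_pred @ H.T) / S                  # Kalman gain, shape (4, 1)
    x_new = x_pred + K.ravel() * y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new, r
```

In such a scheme, a caller would initialize `r` from the range-error statistics of the ELM estimator and iterate this step for each incoming RFID measurement.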
Three-dimensional object detection can provide precise positions of objects, which benefits many robotics applications such as self-driving cars, housekeeping robots, and autonomous navigation. In this work, we focus on accurate object detection in 3D point clouds and propose a new detection pipeline called scale-aware attention-based PillarsNet (SAPN). SAPN is a one-stage 3D object detection approach similar to PointPillars, but it achieves better performance by introducing the following strategies. First, we extract multiresolution pillar-level features from the point clouds to make the detection approach more scale-aware. Second, a spatial-attention mechanism is used to highlight the object activations in the feature maps, which improves detection performance. Finally, SE attention is employed to reweight the features fed into the detection head, which performs 3D object detection in a multitask learning manner. Experiments on the KITTI benchmark show that SAPN achieves performance similar to or better than several state-of-the-art LiDAR-based 3D detection methods. An ablation study reveals the effectiveness of each proposed strategy. Furthermore, the strategies used in this work can be embedded easily into other LiDAR-based 3D detection approaches, improving their detection performance with only slight modifications.
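As a concrete reference for the two attention strategies, here is a minimal PyTorch sketch of a generic SE-style channel-reweighting block and a CBAM-style spatial-attention block. The layer sizes and placement are assumptions for illustration, not SAPN's published architecture.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Generic squeeze-and-excitation channel reweighting (sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                 # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                           # excite: reweight channels

class SpatialAttention(nn.Module):
    """Highlights object activations across the spatial map (sketch)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)      # channel-wise average
        mx, _ = x.max(dim=1, keepdim=True)     # channel-wise max
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn
```

Blocks of this kind are drop-in modules, which is consistent with the claim that the strategies can be embedded into other LiDAR-based detectors with slight modifications.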
The robustness and stability of lane detection are vital for advanced driver assistance systems and even autonomous driving. To meet the challenges of real-time lane detection in complex traffic scenes, a simple but robust multilane detection method is proposed in this paper. The proposed method breaks the lane detection task down into two stages: a lane line detection algorithm based on instance segmentation and a lane modeling algorithm based on an adaptive perspective transform. Firstly, the lane line detection algorithm based on instance segmentation is decomposed into two tasks, and a multitask network based on MobileNet is designed. This network comprises two branches: a lane line semantic segmentation branch and a lane line ID embedding branch. The semantic segmentation branch obtains the segmentation results of lane pixels and reconstructs the lane line binary image. The ID embedding branch determines which pixels belong to the same lane line, assigning the pixels of different lane lines to different categories and then clustering those categories. Secondly, an adaptive perspective transformation model is adopted. In this model, motion information is used to accurately convert the original image into a bird’s-eye view, and a least-squares second-order polynomial is then fitted to the lane line pixels. Finally, experiments on the CULane dataset show that the proposed method achieves performance similar to or better than several state-of-the-art methods; its F1 score on the normal test set and on most challenge test sets exceeds that of the other algorithms, which verifies the effectiveness of the proposed method. Field experiments further show that the proposed method has good practical application value in various complex traffic scenes.
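The second stage can be illustrated with a short OpenCV sketch: warp the lane-pixel binary image to a bird's-eye view and fit a least-squares second-order polynomial to the lane pixels. The fixed source/destination points below are placeholders; the paper adapts the transform from motion information rather than keeping it static.

```python
import cv2
import numpy as np

def fit_lane_birdseye(binary_img, src_pts, dst_pts):
    """Warp a lane-pixel binary image to a bird's-eye view and fit x = f(y).

    src_pts/dst_pts: 4x2 arrays of corresponding points; in the adaptive
    scheme these would be updated per frame from vehicle motion.
    """
    M = cv2.getPerspectiveTransform(src_pts.astype(np.float32),
                                    dst_pts.astype(np.float32))
    h, w = binary_img.shape[:2]
    warped = cv2.warpPerspective(binary_img, M, (w, h))

    ys, xs = np.nonzero(warped)            # coordinates of lane pixels
    if len(ys) < 3:
        return None                        # too few pixels for a quadratic fit
    # Least-squares second-order polynomial: x = a*y**2 + b*y + c
    return np.polyfit(ys, xs, 2)
```

The quadratic is fitted as x = f(y) because lane lines in the bird's-eye view are close to vertical, so a function of the row coordinate avoids degenerate fits.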