With the continuous development of artificial intelligence and computer vision technology, autonomous vehicles have advanced rapidly. Although self-driving vehicles perform well in normal environments, driving in adverse weather still poses a challenge to safety. To improve the detection ability of self-driving vehicles in harsh environments, we first construct a new color-levels offset compensation model that performs adaptive color-levels correction on images, which effectively improves the clarity of targets in adverse weather and facilitates their detection and recognition. We then compare several common one-stage object detection algorithms and build on the best-performing one, YOLOv5. We optimize the YOLOv5 Backbone by increasing the number of model parameters and incorporating a Transformer module and CBAM into the network. We also replace the original CIOU loss function with the EIOU loss. Finally, ablation experiments show that the improved algorithm raises the detection rate of targets, with the mAP reaching 94.7% and the FPS reaching 199.86.
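For reference, below is a minimal sketch of an EIOU-style bounding-box regression loss of the kind described above as replacing CIOU. It is an illustrative PyTorch implementation, not the authors' code; the (x1, y1, x2, y2) box format and tensor shapes are assumptions.

```python
import torch

def eiou_loss(pred, target, eps=1e-7):
    """Illustrative EIoU loss for axis-aligned boxes in (x1, y1, x2, y2) format.

    Besides the IoU term, EIoU penalises the normalised distance between box
    centres and the normalised differences in width and height.
    NOTE: a sketch under assumed conventions, not the paper's implementation.
    """
    # Intersection area
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

    # Union and IoU
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0]) + eps
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1]) + eps
    c2 = cw ** 2 + ch ** 2  # squared diagonal of the enclosing box

    # Squared distance between box centres
    rho2 = ((pred[:, 0] + pred[:, 2]) / 2 - (target[:, 0] + target[:, 2]) / 2) ** 2 \
         + ((pred[:, 1] + pred[:, 3]) / 2 - (target[:, 1] + target[:, 3]) / 2) ** 2

    # Width / height difference terms (what distinguishes EIoU from CIoU)
    pw, ph = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    tw, th = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]

    return 1 - iou + rho2 / c2 + (pw - tw) ** 2 / cw ** 2 + (ph - th) ** 2 / ch ** 2
```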
To facilitate the development of intelligent unmanned loaders and improve the recognition accuracy of loaders in complex scenes, we propose a construction-machinery and material object detection algorithm that improves YOLOv4-Tiny by incorporating an attention mechanism (AM). First, to ensure the robustness of the proposed algorithm, we use style transfer and sliding-window segmentation to increase the diversity of the underlying dataset. Second, because the YOLOv4-Tiny base network uses only layer-by-layer connections and therefore has insufficient feature extraction ability, we adopt a multilayer cascaded residual module to deeply connect low-level and high-level information. Finally, to filter redundant feature information and make the algorithm focus on the most important features, a channel AM is added to the base network to perform a secondary screening of the features in the region of interest, which effectively improves detection accuracy. In addition, to enable small-scale object detection, a multiscale feature pyramid network structure is used in the prediction module to output prediction networks at two different scales. The experimental results show that, compared with the traditional network structure, the proposed algorithm fully exploits the advantages of residual networks and the AM, effectively improving its feature extraction ability and its recognition accuracy for targets at different scales. The final algorithm achieves both high recognition accuracy and fast recognition speed, with mean average precision and detection speed reaching 96.82% and 134.4 fps, respectively.
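As an illustration of the channel attention idea mentioned above, the following is a minimal SE-style channel attention block in PyTorch; the class name, reduction ratio, and layer choices are assumptions and do not reproduce the authors' exact module.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: squeeze spatial dims, then reweight channels.

    NOTE: an illustrative sketch, not the paper's module.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # global average pooling ("squeeze")
        self.fc = nn.Sequential(                   # bottleneck MLP ("excitation")
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)                # (B, C): per-channel statistics
        w = self.fc(w).view(b, c, 1, 1)            # (B, C, 1, 1): per-channel weights
        return x * w                               # secondary screening of features
```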
To improve the recognition accuracy of construction machinery, equipment, and materials in low-contrast scenes, we propose a construction-machinery material recognition algorithm based on multisource sensor information fusion. The millimeter-wave radar is fused with the camera because of its strong penetration in rain, fog, and dim environments. First, the spatial coordinates of the radar and camera are unified by establishing a spatial fusion model between the millimeter-wave radar and the camera; then the targets acquired by the radar are projected onto the image, and an intersection-over-union model of the detection frames is used to generate the camera's region of interest; finally, an improved YOLOv2 algorithm identifies the region of interest. In the improvement, low-level information is first connected with high-level information through multilayer deep connections, and a multiscale feature pyramid network structure is used to recognize objects at different scales. This model effectively reduces interference from other feature categories while improving the recognition efficiency of the system. Validation in different scenes demonstrates that the algorithm can effectively improve the recognition accuracy of machinery and materials in low-contrast scenes.
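To make the two geometric steps of the fusion pipeline concrete, here is a hedged sketch of projecting radar points into the image using assumed intrinsic and extrinsic calibration matrices, and of computing the intersection over union of detection frames. Function names, the radar point format, and the calibration inputs are illustrative assumptions, not the authors' spatial fusion model.

```python
import numpy as np

def project_radar_to_image(points_radar, K, T_cam_radar):
    """Project millimeter-wave radar points (N, 3) in the radar frame onto the image.

    K           : 3x3 camera intrinsic matrix (assumed known from calibration)
    T_cam_radar : 4x4 extrinsic transform from the radar frame to the camera frame
    Returns pixel coordinates of shape (N, 2).
    NOTE: an illustrative sketch, not the paper's spatial fusion model.
    """
    pts_h = np.hstack([points_radar, np.ones((points_radar.shape[0], 1))])  # homogeneous
    pts_cam = (T_cam_radar @ pts_h.T)[:3]          # radar frame -> camera frame
    pix = K @ pts_cam                              # perspective projection
    return (pix[:2] / pix[2]).T                    # normalise by depth

def box_iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) detection frames."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-7)
```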