With the rise of artificial intelligence and related technologies, autonomous driving has become an important direction for future automobile development, and ground extraction from road scenes is a key task. Traditional ground-extraction methods are limited to flat roads and are slow. To address the under-segmentation problem between the ground point cloud and the point clouds of multiple objects, we propose a ground segmentation algorithm for point cloud data based on multi-region segmentation. By dividing the point cloud into concentric regions, the ground data can be located accurately, and the method remains robust on uneven road surfaces, providing a good foundation for the subsequent segmentation of road obstacles. For the extraction of road obstacles, we adopt a sub-region DBSCAN clustering method: because the point cloud is dense near the sensor and sparse far from it, parameters are selected adaptively for each region to improve the accuracy of obstacle extraction. To verify the results, tests were completed on the KITTI dataset under Ubuntu 18.04 with the ROS system, maintaining a processing speed of about 28 fps.
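The region-wise clustering idea above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the ring width, number of rings, and the linear eps-growth rule are all assumed values chosen for the sketch, and the DBSCAN here is a bare-bones version over 2D points.

```python
import math

def ring_index(x, y, ring_width=10.0, num_rings=4):
    """Assign a point to a concentric ring around the sensor origin
    (ring_width and num_rings are illustrative values)."""
    r = math.hypot(x, y)
    return min(int(r // ring_width), num_rings - 1)

def adaptive_eps(ring, base_eps=0.4, growth=0.3):
    """Illustrative rule: point density drops with range,
    so the DBSCAN radius grows for outer rings."""
    return base_eps + growth * ring

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2D points; returns one label per point (-1 = noise)."""
    n = len(points)
    labels = [None] * n
    cluster = -1

    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if math.hypot(xi - xj, yi - yj) <= eps]

    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisionally noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:                # expand the cluster from core points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reached from a core point -> border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs_j = neighbors(j)
            if len(nbrs_j) >= min_pts:
                queue.extend(nbrs_j)
    return labels
```

In the full pipeline, each ring's points would be clustered separately with `dbscan(points_in_ring, adaptive_eps(ring), min_pts)`, so sparse far-range clusters are not fragmented by a radius tuned for the dense near range.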
At present, artificial intelligence is developing rapidly, and intelligent driver-assistance systems based on deep learning are widely used; in autonomous driving, for example, they can accurately identify pedestrians, vehicles, and traffic signs. Convolutional neural networks have achieved excellent results in computer vision and have outstanding feature-extraction ability, so object detection based on deep learning is currently a research hotspot in the field. We propose a vehicle-pedestrian detection method based on YOLOv4-tiny. First, the ResBlock-D module from the ResNet-D network replaces one CSPBlock module in YOLOv4-tiny, reducing computational complexity. Then, a coordinate attention mechanism is added to help the model locate and identify targets more accurately. Experimental results show that the improved YOLOv4-tiny algorithm achieves higher accuracy than the original, with mAP improved by 7.8%, which offers a useful reference for research on intelligent driver-assistance technology.
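The key idea of coordinate attention is to pool a feature map along each spatial axis separately, so the attention weights preserve positional information along the other axis. The sketch below is a simplified, dependency-light illustration in NumPy: the real module interposes a shared 1x1 convolution, batch norm, and a nonlinearity between pooling and the sigmoid, which are replaced here by assumed per-channel weights `w_h` and `w_w`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_h, w_w):
    """Simplified coordinate attention over a (C, H, W) feature map.

    w_h, w_w: per-channel scalar weights (shape (C,)), stand-ins for the
    1x1-conv transforms of the real module.
    """
    pooled_h = x.mean(axis=2)                   # (C, H): average over width
    pooled_w = x.mean(axis=1)                   # (C, W): average over height
    att_h = sigmoid(w_h[:, None] * pooled_h)    # (C, H) row-wise attention
    att_w = sigmoid(w_w[:, None] * pooled_w)    # (C, W) column-wise attention
    # Reweight each location by its row and column attention factors.
    return x * att_h[:, :, None] * att_w[:, None, :]
```

Because one attention vector varies along H and the other along W, the module can localize responses along both axes, which is the property that helps the detector place boxes more precisely.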
Given the low efficiency and high cost of conventional manual and electrical-test methods for detecting defects in PCB production, a PCB defect detection method based on the YOLOv5 algorithm is proposed. A prediction head for small objects is added, forming a four-head detection structure that improves small-object detection; ASFF (adaptively spatial feature fusion) is added to the original FPN + PANet structure of YOLOv5s so that each spatial location can adaptively fuse feature information from different levels; and GAM (global attention mechanism) is added to the original network, applying attention across all three dimensions to strengthen the model's information-extraction ability. Experimental results show that the improved detection method accurately classifies six kinds of defects, with an average precision of 98.8%, which offers a useful reference for deep-learning-based PCB defect detection.
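The fusion step in ASFF can be sketched as a per-location softmax over learned level scores. This is a schematic NumPy illustration, not the paper's code: it assumes the three pyramid levels have already been resized to a common shape, and the `logits` inputs stand in for the 1x1-conv score maps that the real module learns.

```python
import numpy as np

def asff_fuse(levels, logits):
    """Sketch of ASFF-style fusion.

    levels: three feature maps, each of shape (C, H, W), already resized
            to the same resolution.
    logits: three (H, W) score maps (stand-ins for learned 1x1-conv outputs).

    A per-location softmax turns the scores into weights that sum to 1 at
    every pixel, then the feature maps are blended location by location.
    """
    scores = np.stack(logits)                             # (3, H, W)
    scores -= scores.max(axis=0, keepdims=True)           # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=0, keepdims=True)         # softmax over levels
    fused = sum(w[None, :, :] * f for w, f in zip(weights, levels))
    return fused, weights
```

Because the weights are computed per spatial location, one pixel can draw mostly from the fine level while a neighboring pixel draws from a coarse one, which is what lets the fusion adapt across the image.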