In this paper, we propose SYGNet to strengthen the scene-parsing ability of autonomous driving systems under complicated road conditions. SYGNet consists of a feature extraction component and an SVD-YOLO GhostNet component, the latter combining Singular Value Decomposition (SVD), You Only Look Once (YOLO), and GhostNet. In the feature extraction component, we propose an algorithm based on VoxelNet to extract point-cloud features and image features. In the SVD-YOLO GhostNet component, the image data is decomposed by SVD, yielding data with stronger spatial and environmental characteristics. YOLOv3 is then used to obtain the feature map, which is passed to GhostNet to realize real-time scene parsing. We perform our experiments on the KITTI dataset, and the results show that SYGNet is more robust and further improves the accuracy of real-time driving scene parsing. The model code, dataset, and experimental results are available at: https://github.com/WangHewei16/SYGNetfor-Real-time-Driving-Scene-Parsing.
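The SVD step described above can be sketched as a low-rank reconstruction of an image channel: keeping only the top-k singular values retains the dominant spatial structure while suppressing fine-grained noise. This is a minimal illustration with NumPy, not the paper's implementation; the rank k=16 and the function name `svd_lowrank` are illustrative assumptions.

```python
import numpy as np

def svd_lowrank(image: np.ndarray, k: int = 16) -> np.ndarray:
    """Return a rank-k SVD reconstruction of a 2-D image array.

    The rank k is an illustrative choice, not a value from the paper.
    """
    # Thin SVD: u is (H, r), s is (r,), vt is (r, W) with r = min(H, W).
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    # Keep only the k largest singular values/vectors.
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))          # stand-in for one image channel
    approx = svd_lowrank(img, k=16)
    print(approx.shape)                  # same spatial size as the input
```

In practice, each channel of the RGB input would be decomposed separately before being fed to the detector.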