Processing real-time surveillance images for salient object detection can help reduce traffic accidents and improve road safety. Although existing salient object detection methods based on real-time monitoring images for intelligent driving have achieved certain results, shortcomings remain: in complex road environments, these methods have difficulty distinguishing salient targets from the background, leading to false and missed detections. This study therefore investigates a salient object detection method based on real-time monitoring image information for intelligent driving. The Visual Geometry Group (VGG) discriminator of the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) is modified, and techniques such as spectral normalization (SN) are applied to improve training stability. Pixel-level upscaling and feature enhancement are performed on the salient objects in the dataset, providing a richer data foundation for subsequent real-time salient-target detection and defect classification. YOLOv5s is adopted as the detection network, with its original backbone replaced by MobileNetV2, which markedly reduces network complexity and improves detection efficiency. Optimizer tuning and anchor clustering further improve the algorithm's recognition of salient targets in real-time monitoring images for intelligent driving. Experimental results substantiate the efficacy of the proposed method.
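The abstract does not give implementation details for the spectral normalization step. As an illustrative sketch only (function names and shapes are our own, not the paper's), the core idea can be shown in plain NumPy: estimate a weight matrix's largest singular value by power iteration and divide the weights by it, so each discriminator layer has spectral norm approximately 1, which bounds its Lipschitz constant and stabilizes adversarial training.

```python
import numpy as np

def spectral_normalize(W, n_iter=50):
    """Scale W so its largest singular value is ~1.

    The leading singular value is estimated by power iteration,
    as done (per layer, with a persistent u vector) in
    spectral-normalization GAN discriminators.
    """
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v          # estimated largest singular value
    return W / sigma

W = np.random.default_rng(1).normal(size=(8, 16))
W_sn = spectral_normalize(W)
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # spectral norm of W_sn, ~1.0
```

In practice a framework utility (e.g. a built-in spectral-norm wrapper) would be applied to every convolutional layer of the VGG-style discriminator rather than hand-rolled as above.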
Conventionally, locating acupuncture points (acupoints) on the human body requires massage therapists to have extensive experience and skilled technique, so the learning cost is high. Visual positioning of massage acupoints based on image registration can lower this technical barrier, allowing more people to benefit from massage therapy. However, existing algorithms for this task share several shortcomings, including unstable matching results, inaccurate image registration, and poor performance under obvious local deformation or occlusion. In view of these problems, this paper studied a novel visual positioning algorithm for acupoints based on image registration. First, an Image Acupoint Positioning algorithm based on a Convolutional Neural Network (the CNN-based IAP algorithm) was proposed; it combines prior information about acupoint positions in visual images with a 3D CNN, which has stronger feature-expression ability, and maintains high positioning accuracy under unfavorable conditions such as image noise, illumination change, and occlusion. Then, building on the Fully Convolutional Network (FCN) architecture, a multi-scale parallel FCN was constructed that introduces multi-scale parallel downsampling, a spatial pyramid of dilated convolutions, an adaptive channel attention mechanism, direction perception, and upsampling, aiming to improve the model's performance in non-rigid registration of visual images of massage acupoints. Finally, the validity of the proposed model was verified by experimental results.
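The abstract mentions a spatial pyramid of dilated convolutions but gives no specifics. As a minimal illustrative sketch (1D, plain NumPy, our own function names; the paper's network is 2D and learned), the idea is to apply the same small kernel at several dilation rates, so parallel branches see progressively larger receptive fields without adding parameters, and then stack the branch responses:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1D cross-correlation with a dilated kernel
    (gaps of `dilation - 1` samples between kernel taps)."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

def dilated_pyramid(x, kernel, rates=(1, 2, 4)):
    """Run the kernel at several dilation rates in parallel and stack
    the responses (cropped to a common length), ASPP-style."""
    outs = [dilated_conv1d(x, kernel, r) for r in rates]
    n = min(len(o) for o in outs)
    return np.stack([o[:n] for o in outs])

x = np.arange(10.0)
k = np.array([1.0, 0.0, -1.0])             # a simple difference kernel
print(dilated_pyramid(x, k))               # one row per dilation rate
```

With the difference kernel above, the rate-1, rate-2, and rate-4 branches compute differences over spans of 2, 4, and 8 samples respectively, which is exactly the multi-scale context such a pyramid is meant to capture.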