In view of the low speed and precision of traditional fault-detection algorithms for transmission lines under resource constraints, a transmission-line fault target-detection method based on YOLOv8 (You Only Look Once version 8) with a Rep (Representational Pyramid) Visual Transformer and an ultra-lightweight module is proposed. First, the YOLOv8 detection network was built. To address feature redundancy and the network's high computational load, the Rep Visual Transformer module was introduced into the Neck: its multi-head self-attention integrates pixel information across the entire image, enabling the model to learn more global image features and improving its computational speed. Then, the lightweight GSConv (Grouped and Separated Convolution) module was added to the Backbone and Neck to share computation across channels and reduce computing time and memory consumption, balancing the network's computational cost against its detection performance so that the model stays lightweight while retaining high precision. Second, the Wise-IoU loss was adopted as the bounding-box regression (BBR) loss to pull the predicted bounding boxes in each grid cell closer to the true target location; it attenuates the harmful gradients produced by low-quality examples and further improves the algorithm's detection precision. Finally, the algorithm was validated on a data set of 3500 images compiled by a power-supply inspection department over the past four years.
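To make the loss-function step concrete, the following is a minimal plain-Python sketch of the Wise-IoU v1 idea described above: the plain IoU loss is scaled by a distance-based focusing factor computed from the smallest enclosing box, so poorly localized (low-quality) predictions do not dominate the gradient. The function names and the v1 formulation are illustrative assumptions, not the authors' exact implementation.

```python
import math

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def wise_iou_v1_loss(pred, target):
    """Sketch of Wise-IoU v1: the IoU loss (1 - IoU) scaled by a
    distance-based focusing factor R = exp(d^2 / (Wg^2 + Hg^2)),
    where d is the center distance and (Wg, Hg) is the size of the
    smallest box enclosing both pred and target."""
    # squared distance between box centers
    cx_p, cy_p = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cx_t, cy_t = (target[0] + target[2]) / 2, (target[1] + target[3]) / 2
    d2 = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2
    # smallest enclosing box (treated as a constant w.r.t. gradients
    # in the original paper; here we just compute its size)
    wg = max(pred[2], target[2]) - min(pred[0], target[0])
    hg = max(pred[3], target[3]) - min(pred[1], target[1])
    r = math.exp(d2 / (wg ** 2 + hg ** 2))
    return r * (1.0 - iou(pred, target))
```

A perfectly aligned prediction yields zero loss, while a shifted box is penalized slightly more than by the plain IoU loss, which is what pulls predictions toward the true target location.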
The experimental results show that the proposed algorithm outperformed seven classic and improved algorithms. Relative to the original YOLOv8 detection network, its recall and average precision improved by 0.058 and 0.053, respectively, its floating-point operations decreased by 2.3, and its image detection speed rose to 114.9 FPS.