With more and more wind turbines coming into operation, inspecting wind farms has become a challenging task. Currently, inspection robots are applied to inspect essential parts of the wind turbine nacelle, and detecting multiple objects in the nacelle is a prerequisite for the condition monitoring of those parts. In this paper, we improve the original YOLOX-Nano model to address the short monitoring time available for each inspected object and the slow inference speed of the original YOLOX-Nano. Both the accuracy and the inference speed of the improved YOLOX-Nano model are enhanced; in particular, the inference speed is improved by 72.8%, and the model outperforms other lightweight network models on embedded devices. The improved YOLOX-Nano thus satisfies the need for a high-precision, low-latency algorithm for multi-object detection in the wind turbine nacelle.
Crab aquaculture is an important component of the freshwater aquaculture industry in China, encompassing an expansive farming area of over 6000 km² nationwide. Currently, crab farmers rely on manually monitored feeding platforms to count the number and assess the distribution of crabs in the pond. However, this method is inefficient and lacks automation. To enable efficient, rapid crab detection by automated machine-vision systems in low-brightness underwater environments, this paper proposes an underwater image processing approach for crab detection combining two-step color correction with an improved dark channel prior. Firstly, the parameters of the dark channel prior are optimized with guided filtering and quadtrees to address blurred underwater images and artificial lighting. Then, the gray world assumption, the perfect reflection assumption, and strong-channel compensation of the weak channels are applied to enhance the red and blue channels, correct the color of the defogged image, optimize the visual effect of the image, and enrich the image information. Finally, ShuffleNetV2 is applied to optimize the target detection model, improving detection speed and real-time performance. The experimental results show that the proposed method achieves a detection rate of 90.78% and an average confidence level of 0.75. Compared with the improved YOLOv5s detection results on the original images, the detection rate of the proposed method is increased by 21.41% and the average confidence level by 47.06%, which meets a good standard. This approach could effectively build an underwater crab distribution map and provide scientific guidance for crab farming.
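The paper's full pipeline (dark channel prior with guided filtering and quadtrees, plus channel compensation) is specific to that work, but the gray world assumption it builds its color-correction step on is a standard white-balance technique: scale each color channel so its mean matches the image's global mean. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def gray_world_balance(img: np.ndarray) -> np.ndarray:
    """White-balance an H x W x 3 uint8 image under the gray world assumption:
    the average color of a scene is assumed to be gray, so each channel is
    scaled by (global mean / channel mean) to remove a color cast."""
    f = img.astype(np.float64)
    # Per-channel means and the global mean over all channels
    channel_means = f.reshape(-1, f.shape[-1]).mean(axis=0)
    global_mean = channel_means.mean()
    gains = global_mean / channel_means
    # Apply gains and clip back into the valid uint8 range
    balanced = np.clip(f * gains, 0, 255)
    return balanced.astype(np.uint8)
```

Underwater images attenuate red light most strongly, so the red-channel gain is typically largest; this is why the paper pairs gray-world correction with explicit compensation of the weak channels.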
Underwater robot perception is a critical task. Due to the complex underwater environment and the low quality of optical images, traditional methods struggle to obtain accurate and stable target position information, so they cannot meet practical requirements. Meanwhile, the relatively low computing power of underwater robots prevents them from supporting real-time detection with complex deep learning models. To resolve these problems, a lightweight underwater target detection and recognition algorithm based on knowledge distillation optimization is proposed on top of the YOLOv5-lite model. Firstly, a dynamic sampling Transformer module is proposed: after the feature matrix is sparsely sampled, the query matrix is dynamically shifted to perform targeted attention modeling, and shared-kernel parameter convolution is used to optimize the matrix encoding and reduce forward-propagation memory overhead. Then, a distillation method with decoupled localization and recognition is designed for the model-training process, enhancing the transfer of effective localization knowledge from the positive sample boxes and improving detection accuracy while keeping the parameter count unchanged. Validated on real offshore underwater image data, the proposed method improves detection accuracy (mAP) by 6.6% and 5.0% over two baseline networks of different complexity, and is 58.8% more efficient than models such as the standard YOLOv5. Comparison with other mainstream single-stage networks further validates the effectiveness and sophistication of the proposed algorithm.
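The decoupled localization/recognition distillation scheme is the paper's own contribution, but the soft-target transfer it builds on is the classic knowledge-distillation loss: the student is trained to match the teacher's temperature-softened class distribution via KL divergence. A generic sketch of that base loss, not the paper's decoupled variant:

```python
import numpy as np

def softmax(logits: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Numerically stable softmax over the last axis, softened by temperature T."""
    z = np.asarray(logits, dtype=np.float64) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      T: float = 2.0) -> float:
    """Mean KL(teacher || student) between temperature-softened distributions,
    scaled by T^2 so gradient magnitudes stay comparable across temperatures."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return float((T ** 2) * kl.mean())
```

In practice this term is weighted against the usual hard-label detection loss; the paper additionally splits the transfer so that localization knowledge (box regression on positive samples) is distilled separately from classification.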