To develop an underwater sea-cucumber-collecting robot, machine vision is needed to recognize and locate sea cucumbers. This paper proposes a recognition and location method for underwater sea cucumbers based on an improved You Only Look Once version 5 (YOLOv5) model. Because the contrast between sea cucumbers and the underwater background is low, the Multi-Scale Retinex with Color Restoration (MSRCR) algorithm is applied to the images to enhance contrast. To improve recognition precision and efficiency, a Convolutional Block Attention Module (CBAM) is added, and to make small-target recognition more precise, a Detect layer is added to the Head network of YOLOv5s. The improved YOLOv5s model, along with YOLOv5s, YOLOv4, and Faster R-CNN, was evaluated on the same image set. The results show that the improved YOLOv5s achieves higher precision and confidence than the other three models, and is especially strong on small-target recognition, although at the cost of a longer detection time. Compared with YOLOv5s, the precision and recall of the improved model increase by 9% and 11.5%, respectively.
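The contrast-enhancement step above rests on the Retinex idea: each pixel's intensity is compared, in log space, against a smoothed "surround" estimate, and several surround scales are averaged. The sketch below is a minimal, assumption-laden illustration of that multi-scale core on a 1-D signal; it uses a box blur as a stand-in for the Gaussian surround and omits the color-restoration term that distinguishes full MSRCR. Function names and radii are illustrative, not from the paper.

```python
import math

def box_blur(signal, radius):
    """Crude surround estimate: mean over a sliding window
    (a stand-in for the Gaussian surround used in Retinex)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def single_scale_retinex(signal, radius):
    """R(x) = log I(x) - log (surround * I)(x): intensity relative to
    the local average, which boosts local contrast."""
    surround = box_blur(signal, radius)
    return [math.log(s) - math.log(b) for s, b in zip(signal, surround)]

def multi_scale_retinex(signal, radii=(1, 3, 7)):
    """Average the single-scale outputs over several surround sizes,
    trading off detail (small radius) against tonal rendition (large)."""
    per_scale = [single_scale_retinex(signal, r) for r in radii]
    return [sum(vals) / len(radii) for vals in zip(*per_scale)]
```

On a flat signal the output is zero everywhere (no local contrast to enhance), while an isolated bright value maps to a positive response, which is the effect exploited to make low-contrast sea cucumbers stand out from the background.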
To solve the chip location-recognition problem, this paper proposes a lightweight chip-detection algorithm, E-YOLOv5, based on the You Only Look Once version 5 (YOLOv5s) algorithm. Because chip detection points are hard to distinguish from light spots, a simulated-exposure algorithm is used to process part of the training-set images and enhance model robustness. Because the existing network is complex, the lightweight feature-extraction network EfficientNet is introduced to reduce model size. Because the small detection points make recognition imprecise, a Selective Kernel Neural Network (SKNet) module is introduced into EfficientNet to strengthen the model's feature-extraction ability and improve training efficiency, and Efficient Intersection over Union Loss (EIoU_Loss) is used as the loss function to reduce the false-recognition rate. Experiments show that, compared with YOLOv5s, the proposed algorithm improves precision and recall by 3.85% and 3.92%, reduces the loss value by 28.89%, shrinks model size and training time by nearly 20%, and speeds up image processing on CPU by 46.67%. The experimental results show that the proposed algorithm outperforms the other algorithms and can distinguish and identify chip locations precisely and stably.
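The EIoU loss mentioned above extends IoU loss with three explicit penalties: the squared distance between box centers (normalized by the enclosing box's diagonal) and the squared differences in width and height (normalized by the enclosing box's width and height). A minimal sketch, assuming axis-aligned boxes in `(x1, y1, x2, y2)` form; the function name and the small `eps` stabilizer are illustrative choices, not from the paper:

```python
def eiou_loss(box_a, box_b, eps=1e-9):
    """EIoU = 1 - IoU + center_dist^2 / diag^2
            + (w_a - w_b)^2 / Cw^2 + (h_a - h_b)^2 / Ch^2,
    where diag, Cw, Ch describe the smallest box enclosing both."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection and union areas for the IoU term.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0

    # Smallest enclosing box: normalizers for the penalty terms.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    diag2 = cw ** 2 + ch ** 2

    # Squared distance between box centers.
    dx = (ax1 + ax2) / 2 - (bx1 + bx2) / 2
    dy = (ay1 + ay2) / 2 - (by1 + by2) / 2
    center2 = dx ** 2 + dy ** 2

    # Width and height mismatch penalties.
    wa, ha = ax2 - ax1, ay2 - ay1
    wb, hb = bx2 - bx1, by2 - by1

    return (1 - iou
            + center2 / (diag2 + eps)
            + (wa - wb) ** 2 / (cw ** 2 + eps)
            + (ha - hb) ** 2 / (ch ** 2 + eps))
```

Penalizing width and height mismatches directly, rather than through an aspect-ratio term, gives a clearer gradient for tiny boxes such as chip detection points, which is why a loss of this shape helps reduce false recognitions on small targets.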