Precise herbicide spraying and intelligent mechanical weeding are the main ways to reduce the use of chemical pesticides in the field and achieve sustainable agricultural development, and an important prerequisite for both is the accurate and rapid identification of field crops and weeds. To this end, this paper proposes a semantic segmentation model based on an improved U-Net to address the efficient and accurate identification of vegetable crops and weeds. First, a simplified Visual Geometry Group 16 (VGG16) network is used as the encoding network of the improved model; the input images are then successively down-sampled with average pooling layers to create feature maps of various sizes, and these feature maps are laterally merged into the encoding network. Next, the number of convolutional layers in the decoding network is reduced, and efficient channel attention (ECA) is introduced before feature fusion in the decoding network, so that the feature maps from the skip connections in the encoding network and the up-sampled feature maps in the decoding network both pass through an ECA module before being fused. Finally, images of Chinese cabbage and weeds are used as a dataset to compare the improved model with the original U-Net and with the commonly used semantic segmentation models PSPNet and DeepLab V3+. The results show that the mean intersection over union and mean pixel accuracy of the improved model increased over the original U-Net by 1.41 and 0.72 percentage points, to 88.96% and 93.05%, respectively, while the processing time of a single image increased by 9.36% to 64.85 ms. In addition, the improved model segments weeds that are close to or overlap with crops more accurately than the other three comparison models, which is a necessary condition for precise spraying and precise weeding. The improved model can therefore offer strong technical support for the development of intelligent spraying and weeding robots.
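The abstract does not include an implementation, but the ECA step it describes can be illustrated with a short sketch. Below is a minimal PyTorch version of the efficient channel attention module as published by Wang et al. (2020), including that paper's adaptive kernel-size rule; how it is wired into the U-Net decoder (the layer names and channel counts in the trailing comment) is an illustrative assumption, not the authors' code.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention (Wang et al., 2020): a 1-D convolution
    over the globally average-pooled channels produces per-channel weights."""

    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        # Adaptive kernel size: the nearest odd value to log2(C)/gamma + b/gamma
        t = int(abs(math.log2(channels) / gamma + b / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                              # x: (B, C, H, W)
        y = self.pool(x)                               # (B, C, 1, 1)
        y = y.squeeze(-1).transpose(1, 2)              # (B, 1, C)
        y = self.conv(y)                               # (B, 1, C)
        y = self.sigmoid(y.transpose(1, 2).unsqueeze(-1))  # (B, C, 1, 1)
        return x * y                                   # channel-wise reweighting

# Hypothetical decoder usage, mirroring the abstract: both the skip-connection
# features and the up-sampled features pass through ECA before concatenation.
# skip, up = ECA(64)(skip), ECA(64)(up)
# fused = torch.cat([skip, up], dim=1)
```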
Accurate crop detection is a prerequisite for the operation of intelligent agricultural machinery. Image recognition usually lacks accurate position information, while Lidar point clouds make it difficult to distinguish between objects; fortunately, fusing images with Lidar points lets the two complement each other. This research aimed to detect maize (Zea mays L.) seedlings by fusing Lidar data with images. Through coordinate transformation and time stamping, the images and Lidar points were aligned in both the spatial and temporal dimensions. Deep learning was used to develop a maize seedling recognition model, which labeled recognized seedlings with bounding boxes, and the Lidar points were then mapped into these boxes. Only the one-third of points that fell into the middle of each bounding box were selected for clustering, and the calculated cluster center provided the spatial position of the target seedling. The study modified the classical single shot multi-box detector (SSD) by linking only the last feature map to the final output layer, since higher-level feature maps are better suited to detecting relatively large objects; in the images, the maize seedlings were the largest objects because they were deliberately framed as the subjects. This modification enabled the recognition model to process an image in around 60 ms, about 10 ms/image faster than the classical SSD model. The experiment was conducted in a maize field with the maize at the elongation stage. Experimental results showed that the standard deviations of the maximum distance error and maximum angle error were 1.4 cm and 1.1°, respectively, which is tolerable under current technical requirements. Since agricultural fields are organized around staple crops and subject to changeable ambient environments, the fusion of images and Lidar points can provide more precise information and make agricultural machinery smarter. This study can serve as an upstream technology for other research on intelligent agricultural machinery.
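For readers unfamiliar with the box-to-point mapping step, the following sketch shows one way to implement it, assuming the Lidar points have already been projected into image coordinates by the calibrated coordinate transformation. Interpreting "the middle one-third" as the central third of the box width, the choice of DBSCAN, and all parameter values are assumptions for illustration; the abstract does not name the clustering algorithm.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def seedling_position(uv, xyz, box, eps=0.05, min_samples=5):
    """Estimate a seedling's 3-D position from Lidar points mapped
    into its detection bounding box.

    uv  : (N, 2) Lidar points projected into image pixels (u, v)
    xyz : (N, 3) the same points in the Lidar frame, in metres
    box : (u_min, v_min, u_max, v_max) bounding box from the detector
    """
    u_min, v_min, u_max, v_max = box
    # Keep only points in the middle third of the box width, following
    # the paper's "one-third of points in the middle" selection rule.
    third = (u_max - u_min) / 3.0
    mask = (
        (uv[:, 0] >= u_min + third) & (uv[:, 0] <= u_max - third)
        & (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max)
    )
    candidates = xyz[mask]
    if len(candidates) < min_samples:
        return None  # too few Lidar returns to localize reliably

    # Cluster and keep the largest cluster, rejecting stray ground/leaf
    # hits (DBSCAN and eps are assumptions, not the authors' choices).
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(candidates)
    valid = labels[labels >= 0]
    if len(valid) == 0:
        return None
    largest = np.bincount(valid).argmax()
    return candidates[labels == largest].mean(axis=0)  # cluster centre
```

The returned cluster centre is what would be handed downstream as the target coordinate for spraying or weeding actuation.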