Recent computer vision techniques based on convolutional neural networks (CNNs) are considered state-of-the-art tools in weed mapping. However, their performance has been shown to be sensitive to image quality degradation, and variation in lighting conditions adds another level of complexity. We determined the influence of image quality and light consistency on the performance of CNNs in weed mapping by simulating the image formation pipeline. Faster Region-based CNN (R-CNN) and Mask R-CNN served as the CNN examples for object detection and instance segmentation, respectively, while semantic segmentation was represented by DeepLabv3. The degradations simulated in this study included resolution reduction, overexposure, Gaussian blur, motion blur, and noise. The results showed that CNN performance was most affected by resolution, regardless of plant size. When the training and testing images had the same quality, Faster R-CNN and Mask R-CNN were moderately tolerant of low levels of overexposure, Gaussian blur, motion blur, and noise, whereas DeepLabv3 tolerated overexposure, motion blur, and noise at all tested levels. In most cases, quality inconsistency between the training and testing images reduced CNN performance; however, models trained on low-quality images were more tolerant of such inconsistency than those trained on high-quality images. Light inconsistency also reduced CNN performance. Increasing the diversity of lighting conditions in the training images may alleviate this reduction, whereas simply increasing the number of images captured under the same lighting condition does not provide the same benefit. These results provide insight into the impact of image quality and light consistency on CNN performance. The quality threshold established in this study can be used to guide the selection of camera parameters in future weed mapping applications.
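As a rough illustration of how the five degradation types named above might be simulated in an image formation pipeline, the following sketch applies each one to an image with OpenCV and NumPy. The function name and the severity parameterization (`level`) are our own illustrative assumptions, not values or code from the study.

```python
import cv2
import numpy as np

def degrade(img, kind, level):
    """Apply one simulated degradation to a uint8 image (illustrative sketch)."""
    if kind == "resolution":
        # Downsample by an integer factor, then upsample back,
        # mimicking a lower-resolution sensor.
        h, w = img.shape[:2]
        small = cv2.resize(img, (w // level, h // level), interpolation=cv2.INTER_AREA)
        return cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
    if kind == "overexposure":
        # Scale brightness up and clip, mimicking sensor saturation.
        return np.clip(img.astype(np.float32) * level, 0, 255).astype(np.uint8)
    if kind == "gaussian_blur":
        # Kernel size (0, 0) lets OpenCV derive it from sigma.
        return cv2.GaussianBlur(img, (0, 0), sigmaX=level)
    if kind == "motion_blur":
        # Horizontal motion blur via a 1-D averaging kernel of length `level`.
        kernel = np.zeros((level, level), np.float32)
        kernel[level // 2, :] = 1.0 / level
        return cv2.filter2D(img, -1, kernel)
    if kind == "noise":
        # Additive zero-mean Gaussian noise with standard deviation `level`.
        noise = np.random.normal(0, level, img.shape).astype(np.float32)
        return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    raise ValueError(f"unknown degradation: {kind}")
```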
Italian ryegrass (Lolium perenne ssp. multiflorum (Lam.) Husnot) is a troublesome weed species in wheat (Triticum aestivum) production in the United States, severely affecting grain yields. Spatial mapping of ryegrass infestation in wheat fields and early prediction of its impact on yield can assist management decision-making. In this study, unmanned aerial systems (UAS)-based red, green, and blue (RGB) imagery acquired at an early wheat growth stage at two different experimental sites was used to develop predictive models. Deep neural networks (DNNs) coupled with an extensive feature selection method were used to detect ryegrass in wheat and estimate ryegrass canopy coverage. Predictive models were developed by regressing early-season ryegrass canopy coverage (%) against end-of-season (at wheat maturity) biomass and seed yield of ryegrass, as well as biomass and grain yield reduction (%) of wheat. Italian ryegrass was detected with high accuracy (precision = 95.44 ± 4.27%, recall = 95.48 ± 5.05%, F-score = 95.56 ± 4.11%) using the best model, which included four features: hue, saturation, excess green index, and visible atmospherically resistant index. End-of-season ryegrass biomass was predicted with high accuracy (R² = 0.87), whereas the other variables were predicted with moderate to high accuracy (R² values of 0.74 for ryegrass seed yield, 0.73 for wheat biomass reduction, and 0.69 for wheat grain yield reduction). The methodology demonstrated in the current study shows great potential for mapping and quantifying ryegrass infestation and predicting its competitive response in wheat, allowing for timely management decisions.
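The excess green index (ExG) and the visible atmospherically resistant index (VARI) used as features above follow standard formulations, ExG = 2g − r − b on chromatic coordinates and VARI = (G − R)/(G + R − B). A minimal sketch of computing all four selected features from an RGB image is given below; the function name and the epsilon guard against division by zero are our own additions.

```python
import cv2
import numpy as np

def color_features(rgb):
    """Compute hue, saturation, ExG, and VARI from an RGB float image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8                                 # avoids division by zero (our addition)
    total = r + g + b + eps
    rn, gn, bn = r / total, g / total, b / total
    exg = 2 * gn - rn - bn                     # excess green index
    vari = (g - r) / (g + r - b + eps)         # visible atmospherically resistant index
    hsv = cv2.cvtColor(rgb.astype(np.float32), cv2.COLOR_RGB2HSV)
    hue, sat = hsv[..., 0], hsv[..., 1]        # hue in degrees, saturation in [0, 1]
    return hue, sat, exg, vari
```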
In recent years, unmanned aerial systems (UAS) have emerged as an innovative technology for providing spatio-temporal information about weed species in crop fields. Such information is a critical input for any site-specific weed management program. A multi-rotor UAS (Phantom 4) equipped with an RGB sensor was used to collect imagery in three bands (red, green, and blue; 0.8 cm/pixel resolution) with the objectives of (a) mapping weeds in cotton and (b) determining the relationship between image-based weed coverage and ground-based weed densities. For weed mapping, three weed density levels (high, medium, and low) were established for a mix of weed species, with three replications. To determine weed densities through ground truthing, five quadrats (1 m × 1 m) were laid out in each plot. The aerial imagery was preprocessed and subjected to the Hough transform to delineate cotton rows. Following the separation of inter-row vegetation from crop rows, multi-level classification coupled with machine learning algorithms was used to distinguish intra-row weeds from cotton. Overall accuracy levels of 89.16%, 85.83%, and 83.33%, with kappa values of 0.84, 0.79, and 0.75, were achieved for detecting weed occurrence in high-, medium-, and low-density plots, respectively. Further, ground-truth overall weed density values were strongly correlated (r² = 0.80) with image-based weed coverage assessments. Among the specific weed species evaluated, Palmer amaranth (Amaranthus palmeri S. Watson) showed the highest correlation (r² = 0.91), followed by red sprangletop (Leptochloa mucronata (Michx.) Kunth) (r² = 0.88). The results highlight the utility of UAS-borne RGB imagery for weed mapping and density estimation in cotton for precision weed management.
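For readers unfamiliar with the row-delineation step, the sketch below shows one plausible way to binarize vegetation and extract crop-row line segments with OpenCV's probabilistic Hough transform. The filename, the excess-green threshold, and the Hough parameters are illustrative assumptions, not values from the study.

```python
import cv2
import numpy as np

# Hypothetical input; OpenCV loads images in BGR channel order.
img = cv2.imread("cotton_plot.png").astype(np.int32)
b, g, r = img[..., 0], img[..., 1], img[..., 2]

# Simple excess-green vegetation mask (threshold chosen for illustration).
exg = 2 * g - r - b
mask = (exg > 20).astype(np.uint8) * 255

# Probabilistic Hough transform; long line segments are taken as crop rows.
lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=200,
                        minLineLength=300, maxLineGap=50)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    print(f"row segment: ({x1},{y1}) -> ({x2},{y2})")
```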
Site-specific treatment of weeds in agricultural landscapes has gained importance in recent years due to economic savings and minimal environmental impact. Different detection methods have been developed and tested for precision weed management systems, and recent developments in neural networks offer great prospects. A major limitation of neural network models, however, is the requirement for high volumes of training data. The current study explores an alternative to real images to address this issue. Synthetic images were generated with various strategies using plant instances clipped from UAV-borne real images. In addition, generative adversarial networks (GANs) were used to generate fake plant instances, which were also used in generating synthetic images. These images were used to train a powerful convolutional neural network (CNN) known as "Mask R-CNN" for weed detection and segmentation in a transfer learning mode. The study was conducted on morningglories (MG) and grass weeds (Grass) infesting cotton. Biomass for individual weeds was also collected in the field for biomass modeling using detection and segmentation results derived from model inference. Results showed comparable performance between the real-plant-based synthetic image dataset (mean average precision for masks, mAPm: 0.60; for bounding boxes, mAPb: 0.64) and the real image dataset (mAPm: 0.80; mAPb: 0.81). However, the mixed dataset (real images plus real-instance synthetic images) yielded no performance gain for segmentation masks and only a very small gain for bounding boxes (mAPm: 0.80; mAPb: 0.83). Around 40–50 plant instances were sufficient for generating synthetic images that yielded optimal performance. Arranging cotton in rows in the synthetic images was beneficial compared to random orientation. Synthetic images generated with automatically clipped plant instances performed similarly to those generated with manually clipped instances. Synthetic images based on GAN-derived fake plant instances did not perform as effectively as those based on real plant instances. The canopy mask area predicted weed biomass better than the bounding box area, with R² values of 0.66 and 0.46 for MG and Grass, respectively. These findings offer valuable insights for future efforts to use synthetic images for weed detection, segmentation, and biomass estimation in row crops.
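To make the instance-pasting strategy concrete, here is a minimal sketch of compositing clipped plant cut-outs onto a soil background with Pillow. The function, its parameters, and the row-placement logic are our own illustrative assumptions rather than the study's implementation.

```python
import random
from PIL import Image

def make_synthetic(background, instances, n_weeds=10, row_ys=None):
    """Paste clipped plant instances (RGBA cut-outs) onto a soil background.

    `instances` are PIL RGBA images clipped from real UAV frames; `row_ys`
    optionally lists y-coordinates of simulated cotton rows (row-oriented
    layout), otherwise placement is fully random. Hypothetical helper.
    """
    canvas = background.copy()
    for _ in range(n_weeds):
        inst = random.choice(instances)
        x = random.randint(0, max(0, canvas.width - inst.width))
        if row_ys:
            y = random.choice(row_ys)              # snap to a simulated crop row
        else:
            y = random.randint(0, max(0, canvas.height - inst.height))
        canvas.paste(inst, (x, y), mask=inst)      # alpha channel masks the paste
    return canvas

# Example use (hypothetical files):
# soil = Image.open("soil_background.png").convert("RGBA")
# cutouts = [Image.open(p).convert("RGBA") for p in ["mg_01.png", "grass_01.png"]]
# synthetic = make_synthetic(soil, cutouts, n_weeds=20)
```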