Crop growth and yield monitoring over agricultural fields is an essential procedure for food security and for predicting agricultural economic returns. Advances in remote sensing have enhanced the monitoring of agricultural crop development and the estimation of crop yields. Remote sensing and GIS techniques were therefore employed in this study to predict potato tuber yield on three 30 ha center-pivot irrigated fields in an agricultural scheme located in the Eastern Region of Saudi Arabia. Landsat-8 and Sentinel-2 satellite images were acquired during the potato growth stages, and two vegetation indices (the normalized difference vegetation index (NDVI) and the soil adjusted vegetation index (SAVI)) were generated from the images. Vegetation index maps were developed and classified into zones based on vegetation health status, and stratified random sampling points were placed accordingly. Potato yield samples were collected 2–3 days prior to harvest and were correlated with the corresponding NDVI and SAVI values, from which yield prediction algorithms were developed and used to generate yield prediction maps. Results of the study revealed that the difference between predicted and actual yield values (prediction error) ranged between 7.9 and 13.5% for Landsat-8 images and between 3.8 and 10.2% for Sentinel-2 images. The relationship between actual and predicted yield values produced R2 values ranging between 0.39 and 0.65 for Landsat-8 images and between 0.47 and 0.65 for Sentinel-2 images. The study also revealed considerable variation in productivity across the three fields: high-yield areas produced an average yield above 40 t ha-1, while low-yield areas produced, on average, less than 21 t ha-1. Identifying such large variation in field productivity will assist farmers and decision makers in managing their practices.
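The two indices used in this abstract have standard definitions: NDVI = (NIR − Red) / (NIR + Red), and SAVI adds a soil-brightness correction factor L (commonly 0.5). A minimal sketch, with illustrative reflectance values (the band values are not taken from the study):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil adjusted vegetation index; L is the soil-brightness correction
    factor (0.5 is the common default for intermediate vegetation cover)."""
    return (nir - red) / (nir + red + L) * (1.0 + L)

# Illustrative surface-reflectance values for a healthy canopy pixel
nir, red = 0.45, 0.08
print(round(float(ndvi(nir, red)), 3))  # 0.698
print(round(float(savi(nir, red)), 3))  # 0.539
```

With rasters, the same functions apply element-wise to whole NumPy band arrays, which is how per-pixel index maps like those in the study are typically produced.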
Monitoring and prediction of within-field crop variability can help farmers make the right decisions in different situations. Current advances in remote sensing and the availability of high-resolution, high-frequency, free Sentinel-2 images make the implementation of Precision Agriculture (PA) practical for a wider range of farmers. This study investigated the possibility of using vegetation indices (VIs) derived from Sentinel-2 images and machine learning techniques to assess corn (Zea mays) grain yield spatial variability at the within-field scale. A 22 ha study field in northern Italy was monitored between 2016 and 2018; corn yield was measured by a grain yield monitor mounted on the harvester, recording more than 20,000 georeferenced yield observation points from the study field each season. VIs from a total of 34 Sentinel-2 images at different crop ages were analyzed for correlation with the measured yield observations. Multiple regression and two different machine learning approaches were also tested to model corn grain yield. The three main results were the following: (i) the Green Normalized Difference Vegetation Index (GNDVI) provided the highest R2 value of 0.48 for monitoring within-field variability of corn grain yield; (ii) the most suitable period for corn yield monitoring was a crop age between 105 and 135 days from the planting date (R4–R6); (iii) Random Forest was the most accurate machine learning approach for predicting within-field variability of corn yield, with an R2 value of almost 0.6 over an independent validation set of half of the total observations. Based on these results, within-field variability of corn yield for previous seasons could be investigated from archived Sentinel-2 data using GNDVI at crop stages R4–R6.
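The workflow described here (multi-date VI features, a Random Forest regressor, half the yield-monitor points held out for validation) can be sketched with scikit-learn. The data below is synthetic and the feature layout (GNDVI from five hypothetical image dates) is an assumption for illustration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the yield-monitor points: GNDVI at five
# hypothetical acquisition dates as features, grain yield (t/ha) as target.
n = 2000
gndvi = rng.uniform(0.3, 0.9, size=(n, 5))
yield_t_ha = 4 + 10 * gndvi.mean(axis=1) + rng.normal(0, 0.5, n)

# Half the points held out for validation, mirroring the study's split.
split = n // 2
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(gndvi[:split], yield_t_ha[:split])
r2 = r2_score(yield_t_ha[split:], model.predict(gndvi[split:]))
print(round(r2, 2))
```

On real yield-monitor data the points would first be cleaned and co-registered to the Sentinel-2 pixel grid before training; that preprocessing is omitted here.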
Aim: The recent availability of Sentinel-2 satellites has led to increasing interest in their use in viticulture. The aim of this short communication is to determine the performance and limitations of a Sentinel-2 vegetation index in precision viticulture applications, in terms of correlation and variability assessment, compared with the same vegetation index derived from an unmanned aerial vehicle (UAV). The normalised difference vegetation index (NDVI) was used as the reference vegetation index. Methods and Results: UAV and Sentinel-2 vegetation indices were acquired for 30 vineyard blocks located in the south of France without inter-row grass. From the UAV imagery, the vegetation index was calculated using both a mixed-pixel approach (both vine and inter-row) and pure vine-only pixels. In addition, the vine projected-area data were extracted using a support vector machine algorithm for vineyard segmentation. The vegetation index was obtained from Sentinel-2 imagery acquired at approximately the same time as the UAV imagery; the Sentinel-2 images used a mixed-pixel approach, as the pixel size is greater than the row width. The correlations between these three layers and the Sentinel-2 derived vegetation indices were calculated, with spatial autocorrelation correction applied for the significance test. The Gini coefficient was used to estimate the variability detected by each sensor at the within-field scale. The effects of block border and dimension on the correlations were also estimated. Conclusions: The comparison between the Sentinel-2 and UAV vegetation indices showed an increase in correlation when border pixels were removed. Block dimensions did not affect the significance of the correlation unless blocks were < 0.5 ha; below this threshold, the correlation was non-significant in most cases. Sentinel-2 data were strongly correlated with UAV-acquired data at both the field (R2 = 0.87) and sub-field (R2 = 0.84) scales.
In terms of variability detected, Sentinel-2 proved able to detect the same amount of variability as the UAV mixed-pixel vegetation index. Significance and impact of the study: This study showed under which field conditions the Sentinel-2 vegetation index can be used instead of UAV-acquired images, namely when high spatial resolution (vine-specific) management is not needed and the vineyard has no inter-row grass. This type of information may help growers choose the most appropriate information sources for detecting variability according to their vineyard characteristics.
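The Gini coefficient used in this abstract to compare detected variability is the standard inequality measure: the mean absolute difference between all pairs of values, divided by twice the mean. A minimal sketch with illustrative pixel values (not the study's data):

```python
import numpy as np

def gini(values):
    """Gini coefficient of a set of non-negative values.
    0 = perfectly uniform field; values approaching 1 mean the
    variability is concentrated in a few pixels."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    # Closed form for sorted values, equivalent to the pairwise definition
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * np.sum(x)) - (n + 1) / n

print(gini([5, 5, 5, 5]))             # 0.0 -> no detected variability
print(round(gini([0, 0, 0, 10]), 3))  # 0.75 -> highly concentrated
```

Comparing the coefficient computed from UAV pixels with the one from Sentinel-2 pixels over the same block indicates whether the coarser sensor captures the same amount of within-field variability.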
Over the last few years, several Convolutional Neural Networks for object detection have been proposed, characterised by different accuracies and speeds. In viticulture, yield estimation and prediction are used for efficient crop management, taking advantage of precision viticulture techniques. Convolutional Neural Networks for object detection represent an alternative methodology for grape yield estimation, which usually relies on manual harvesting of sample plants. In this paper, six versions of the You Only Look Once (YOLO) object detection algorithm (YOLOv3, YOLOv3-tiny, YOLOv4, YOLOv4-tiny, YOLOv5x, and YOLOv5s) were evaluated for real-time bunch detection and counting in grapes. White grape varieties were chosen for this study, as identifying white berries against a leaf background is more difficult than identifying red berries. The YOLO models were trained on a heterogeneous dataset populated by images retrieved from open datasets and acquired in the field under several illumination conditions, backgrounds, and growth stages. Results showed that YOLOv5x and YOLOv4 achieved F1-scores of 0.76 and 0.77, respectively, with detection speeds of 31 and 32 FPS. In contrast, YOLOv5s and YOLOv4-tiny achieved F1-scores of 0.76 and 0.69, respectively, with detection speeds of 61 and 196 FPS. The final YOLOv5x model for bunch counting, which accounted for bunch occlusion, was able to estimate the number of bunches per plant with an average error of 13.3% per vine. The best combination of accuracy and speed was achieved by YOLOv4-tiny, which should be considered for real-time grape yield estimation, while YOLOv3 was affected by a False Positive–False Negative compensation, which decreased the RMSE.
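The two metrics reported for the YOLO comparison have simple definitions: the F1-score is the harmonic mean of precision and recall over true/false positives and false negatives, and the per-vine counting error is the absolute deviation from the actual bunch count. A short sketch with illustrative detection counts (not the paper's numbers):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, the detection metric
    used to compare the YOLO variants."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def count_error_pct(predicted, actual):
    """Per-vine bunch-count error as a percentage of the actual count."""
    return abs(predicted - actual) / actual * 100

# Illustrative detector run on one vine: 77 true bunches,
# 70 correctly detected, 12 spurious detections, 7 missed.
print(round(f1_score(tp=70, fp=12, fn=7), 2))        # 0.88
print(round(count_error_pct(predicted=82, actual=77), 1))  # 6.5
```

Note the metrics can disagree: a detector whose false positives and false negatives roughly cancel (as reported for YOLOv3) can show a low counting error despite a mediocre F1-score.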