Early detection of grapevine viral diseases is critical for timely intervention to prevent the disease from spreading through the entire vineyard. Hyperspectral remote sensing can potentially detect and quantify viral diseases in a nondestructive manner. This study utilized plant-level hyperspectral imagery to identify and classify grapevines inoculated with the newly discovered DNA virus grapevine vein-clearing virus (GVCV) at the early asymptomatic stages. An experiment was set up at a test site at South Farm Research Center, Columbia, MO, USA (38.92° N, 92.28° W), with two grapevine groups, healthy and GVCV-infected, while other conditions were controlled. Images of each vine were captured with a Specim IQ hyperspectral sensor (400–1000 nm; Specim, Oulu, Finland). The hyperspectral images were calibrated and preprocessed to retain only grapevine pixels. A statistical approach was employed to discriminate the reflectance spectra of healthy and GVCV vines. Disease-centric vegetation indices (VIs) were established and explored in terms of their contribution to classification power. Pixel-wise (spectral features) classification was performed in parallel with image-wise (joint spatial–spectral features) classification within a framework involving deep learning architectures and traditional machine learning. The results showed that: (1) the discriminative wavelength regions included the 900–940 nm range in the near-infrared (NIR) region in vines 30 days after inoculation (DAI) and the entire visible (VIS) region of 400–700 nm in vines 90 DAI; (2) the normalized phaeophytinization index (NPQI), fluorescence ratio index 1 (FRI1), plant senescence reflectance index (PSRI), anthocyanin index (AntGitelson), and water stress and canopy temperature (WSCT) measures were the most discriminative indices; (3) the support vector machine (SVM) was effective in VI-wise classification with smaller feature spaces, while the random forest (RF) classifier performed better in pixel-wise and image-wise classification with larger feature spaces; and (4) the automated 3D convolutional neural network (3D-CNN) feature extractor provided promising results over the 2D convolutional neural network (2D-CNN) in learning features from hyperspectral data cubes with a limited number of samples.
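As a concrete illustration of the pixel-wise, VI-based pipeline described above, the following minimal sketch computes two of the named indices from a hyperspectral cube and feeds them to an RF classifier. It is not the authors' code: the data are synthetic, the helper `band` and all variable names are assumptions, and only the NPQI and PSRI formulas, (R415 − R435)/(R415 + R435) and (R678 − R500)/R750, follow their published definitions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def band(cube, wavelengths, target_nm):
    """Return the reflectance plane of the band closest to target_nm."""
    return cube[..., int(np.argmin(np.abs(wavelengths - target_nm)))]

# Synthetic stand-in for a calibrated 400-1000 nm cube: rows x cols x bands.
rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 1000, 204)
cube = rng.uniform(0.05, 0.6, (64, 64, 204))       # mock reflectance in [0, 1]
labels = rng.integers(0, 2, (64, 64))              # 0 = healthy, 1 = GVCV (mock)

# Two of the disease-centric VIs named in the abstract (published formulas):
# NPQI = (R415 - R435) / (R415 + R435),  PSRI = (R678 - R500) / R750
r415, r435 = band(cube, wavelengths, 415), band(cube, wavelengths, 435)
npqi = (r415 - r435) / (r415 + r435)
psri = (band(cube, wavelengths, 678) - band(cube, wavelengths, 500)) \
       / band(cube, wavelengths, 750)

# Pixel-wise classification: each pixel's VI values form one feature vector.
X = np.stack([npqi.ravel(), psri.ravel()], axis=1)
y = labels.ravel()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"pixel-wise RF accuracy (mock data): {clf.score(X_te, y_te):.2f}")
```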
Leaf chlorophyll concentration (LCC) is an important indicator of plant health, vigor, physiological status, productivity, and nutrient deficiencies. Hyperspectral spectroscopy at the leaf level has been widely used to estimate LCC accurately and non-destructively. This study utilized leaf-level hyperspectral data with derivative calculus and machine learning to estimate the LCC of sorghum. We calculated fractional derivatives (FDs) of the reflectance spectra for orders from 0.2 to 2.0 in 0.2 increments. Additionally, 43 common vegetation indices (VIs) were calculated from the leaf spectral reflectance factor for comparison with the reflectance-based data. Within the modeling pipeline, three feature selection methods were assessed: Pearson's correlation coefficient (PCC), partial least squares-based variable importance in projection (VIP), and random forest-based mean decrease impurity (MDI). Finally, we used partial least squares regression (PLSR), random forest regression (RFR), support vector regression (SVR), and extreme learning regression (ELR) to estimate the LCC of sorghum. Results showed that: (1) increasing the derivative order improved model performance up to a certain order in the reflectance-based analysis, although no single order could conclusively be called optimal for estimating the LCC of sorghum; (2) VI-based modeling outperformed derivative-augmented reflectance factor-based modeling; (3) MDI was effective in selecting sensitive features from the large feature space of the reflectance-based analysis, whereas the simpler PCC worked better with the smaller feature space of the VI-based analysis; and (4) SVR outperformed all other models in the reflectance-based analysis, whereas ELR with VIs derived from the original reflectance yielded slightly better results than all other models.
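Fractional derivatives of reflectance spectra are commonly computed with the Grünwald–Letnikov (GL) formulation; whether this study used exactly that definition is an assumption, so the sketch below is illustrative only. It evaluates the order-α derivative at band spacing h via the truncated series h^(−α) Σ_k w_k f(x_{n−k}), with weights w_0 = 1 and w_k = w_{k−1}(k − 1 − α)/k; the function name and mock spectrum are hypothetical.

```python
import numpy as np

def gl_fractional_derivative(reflectance, alpha, h=1.0):
    """Grunwald-Letnikov fractional derivative of order alpha (illustrative).

    D^alpha f(x_n) ~ h**(-alpha) * sum_{k=0..n} w_k * f(x_{n-k}),
    with w_0 = 1 and w_k = w_{k-1} * (k - 1 - alpha) / k.
    For alpha = 1 this reduces to the first difference and for alpha = 2
    to the second difference, so integer orders behave as expected.
    """
    n = len(reflectance)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    out = np.empty(n)
    for i in range(n):
        out[i] = w[: i + 1] @ reflectance[i::-1]   # truncated GL series
    return out / h**alpha

# Sweep the same orders as the study (0.2 to 2.0 in 0.2 steps) on a mock spectrum.
wavelengths = np.arange(400, 1001, 1.0)            # nm; 1 nm spacing assumed
spectrum = 0.3 + 0.2 * np.sin(wavelengths / 80.0)  # stand-in reflectance factor
for alpha in np.arange(0.2, 2.01, 0.2):
    fd = gl_fractional_derivative(spectrum, alpha, h=1.0)
    print(f"order {alpha:.1f}: value at band 50 = {fd[50]:+.5f}")
```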
Recent advances in unmanned aerial vehicles (UAVs), mini and mobile sensors, and GeoAI (a blend of geospatial and artificial intelligence (AI) research) are the main highlights among agricultural innovations to improve crop productivity and thus secure vulnerable food systems. This study investigated the versatility of UAV-borne multisensory data fusion within a multi-task deep learning framework for high-throughput phenotyping in maize. Data were collected over an experimental corn field in Urbana, IL, USA during the growing season using UAVs equipped with a set of miniaturized sensors, including hyperspectral, thermal, and LiDAR. A full suite of eight phenotypes was measured in situ at the end of the season as ground truth data: dry stalk biomass, cob biomass, dry grain yield, harvest index, grain nitrogen utilization efficiency (Grain NutE), grain nitrogen content, total plant nitrogen content, and grain density. After radiometric calibration and geo-correction, the aerial data were analytically processed in three primary approaches. First, an extended version of the normalized difference spectral index (NDSI) served as a simple arithmetic combination of different data modalities to explore their degree of correlation with maize phenotypes. The extended NDSI analysis revealed that the NIR spectra (750–1000 nm) alone were strongly related to all eight maize traits. Second, a fusion of vegetation indices, structural indices, and a thermal index selectively handcrafted from each data modality was fed to classical machine learning regressors, support vector machine (SVM) and random forest (RF). The prediction performance varied from phenotype to phenotype, ranging from R² = 0.34 for grain density up to R² = 0.85 for both grain nitrogen content and total plant nitrogen content. Further, the fusion of hyperspectral and LiDAR data overcame the limitations of any single data modality, especially the vegetation saturation effect that occurs in optical remote sensing. Third, a multi-task deep convolutional neural network (CNN) was customized to take a raw imagery fusion of hyperspectral, thermal, and LiDAR data and predict multiple maize traits at a time. The multi-task deep learning model performed comparably to, and for some traits better than, the mono-task deep learning models and machine learning regressors. Data augmentation boosted the deep learning models' prediction accuracy, which helps to alleviate the intrinsic limitations of small sample sizes and unbalanced sample classes in remote sensing research. Theoretical and practical implications for plant breeders and crop growers are also discussed.
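The extended NDSI idea, forming a normalized difference (Bi − Bj)/(Bi + Bj) from any pair of feature layers, including pairs that cross modalities, and ranking pairs by their correlation with a trait, can be sketched as below. Everything here is a hedged illustration: the data are synthetic, the layer set and the dry-grain-yield target are placeholders, and the brute-force pair search is an assumed, not confirmed, implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_plots = 120                                  # plot-level observations (mock)
hyper = rng.uniform(0.05, 0.6, (n_plots, 50))  # 50 spectral bands, reflectance
thermal = rng.uniform(0.2, 0.8, (n_plots, 1))  # normalized canopy temperature
height = rng.uniform(0.1, 0.9, (n_plots, 1))   # normalized LiDAR canopy height
layers = np.hstack([hyper, thermal, height])   # all modalities on a [0, 1] scale
yield_t = rng.uniform(5, 12, n_plots)          # dry grain yield (mock target)

# Exhaustive pair search: compute NDSI for every layer pair, keep the pair
# whose index correlates most strongly (by absolute Pearson r) with the trait.
n = layers.shape[1]
best_pair, best_r = None, 0.0
for i in range(n):
    for j in range(i + 1, n):
        ndsi = (layers[:, i] - layers[:, j]) / (layers[:, i] + layers[:, j] + 1e-9)
        r = np.corrcoef(ndsi, yield_t)[0, 1]
        if abs(r) > abs(best_r):
            best_pair, best_r = (i, j), r
print(f"best layer pair {best_pair}, Pearson r = {best_r:+.3f}")
```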