Background: Banana (Musa spp.) is the most popular marketable fruit crop grown worldwide and a dominant staple food in many developing countries. Worldwide, banana production is affected by numerous diseases and pests. Novel, rapid methods for the timely detection of pests and diseases will allow surveillance and the development of control measures with greater efficiency. As deep convolutional neural networks (DCNN) and transfer learning have been applied successfully in various fields, they have recently moved into the domain of just-in-time crop disease detection. The aim of this research is to develop an AI-based banana disease and pest detection system using a DCNN to support banana farmers. Results: Large datasets of expert pre-screened banana disease and pest symptom/damage images were collected from various hotspots in Africa and Southern India. To build a detection model, we retrained three different convolutional neural network (CNN) architectures using a transfer learning approach. A total of six different models were developed for 18 different classes (disease by plant part) using images collected from different parts of the banana plant. Our studies revealed that ResNet50- and InceptionV2-based models performed better than MobileNetV1. These architectures achieved state-of-the-art results for banana disease and pest detection, with an accuracy of more than 90% in most of the models tested. These experimental results were comparable with other state-of-the-art models reported in the literature. With a view to running these detection capabilities on a mobile device in the future, we evaluated the performance of SSD (single shot detector) MobileNetV1. Performance and validation metrics were also computed to measure the accuracy of the different models for automated disease detection. Conclusion: Our results showed that the DCNN is a robust and easily deployable strategy for digital banana disease and pest detection. Using a pre-trained disease recognition model, we were able to perform deep transfer learning (DTL) to produce a network that makes accurate predictions. This high success rate makes the model a useful early disease and pest detection tool, and this research could be further extended to develop a fully automated mobile app to help millions of banana farmers in developing countries.
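Although the abstract does not include the authors' implementation, the transfer-learning workflow it describes can be illustrated with a minimal sketch in TensorFlow/Keras: a ResNet50 backbone pre-trained on ImageNet is reused as a frozen feature extractor, and only a new classification head is trained on the banana images. The directory layout, image size, and training settings below are illustrative assumptions; only the ResNet50 backbone and the 18 classes come from the abstract.

```python
# Minimal transfer-learning sketch in the spirit of the abstract: a ResNet50
# backbone pre-trained on ImageNet is reused as a feature extractor and a new
# classification head is trained on banana disease/pest images.
# Paths, image size, and hyperparameters are illustrative assumptions.
import tensorflow as tf

NUM_CLASSES = 18          # disease/pest classes by plant part, per the abstract
IMG_SIZE = (224, 224)

# Load labelled images from a directory tree (one sub-folder per class).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "banana_images/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "banana_images/val", image_size=IMG_SIZE, batch_size=32)

# Pre-trained backbone with its ImageNet weights frozen.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

# New classification head trained from scratch on the banana data.
inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```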
Challenges in rapid phenotyping are a major bottleneck for plant breeders trying to develop the cultivars needed to feed a growing world population. Remote sensing techniques, particularly LiDAR, have proven useful for the quick phenotyping of many characteristics across a number of widely grown crops. However, these techniques have not been demonstrated for cassava, a crop of global importance both as a source of starch and as animal fodder. In this study, we demonstrate the applicability of terrestrial LiDAR to the determination of cassava biomass through binned height estimations, total above-ground biomass, and total leaf biomass. We also compared single LiDAR scans against multiple registered scans for estimation, all within a field setting. Our results show that while binned height does not appear to be an effective method of above-ground phenotyping, terrestrial laser scanners can be a reliable tool for acquiring surface biomass data in cassava. Additionally, we found that single scans provide correlations of similar accuracy to multiple scans in most cases, which will allow the 3D phenotyping method to be conducted even more rapidly than expected.
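As a rough illustration of the binned-height approach mentioned above (not the authors' actual pipeline), a plot-level point cloud can be divided into fixed height bins and the resulting return counts correlated with harvested biomass. The bin width, file format, ground normalisation, and file names below are assumptions made for the sketch.

```python
# Illustrative sketch: bin a plot-level LiDAR point cloud by height above ground
# and correlate total canopy return counts with measured above-ground biomass.
# Bin width, file layout, and names are assumptions, not the authors' method.
import numpy as np
from scipy.stats import pearsonr

def binned_height_profile(points_xyz, bin_width=0.1, max_height=3.0):
    """Count LiDAR returns in fixed height bins above the local ground level."""
    z = points_xyz[:, 2]
    heights = z - z.min()                      # crude ground normalisation
    bins = np.arange(0.0, max_height + bin_width, bin_width)
    counts, _ = np.histogram(heights, bins=bins)
    return counts

# Hypothetical example: one point cloud per plot plus its measured biomass (kg).
plots = [np.loadtxt(f"plot_{i}.xyz") for i in range(1, 31)]   # assumed ASCII x y z files
profiles = np.array([binned_height_profile(p) for p in plots])
biomass = np.loadtxt("aboveground_biomass_kg.txt")            # assumed one value per plot

# Correlate total canopy return count with harvested above-ground biomass.
r, p_value = pearsonr(profiles.sum(axis=1), biomass)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```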
Background: Rapid, non-destructive measurements to predict cassava root yield over the full growing season across large numbers of germplasm and multiple environments are a major challenge in cassava breeding programs. Rather than waiting until the harvest season, multispectral imagery from unmanned aerial vehicles (UAVs) can measure canopy metrics and vegetation index (VI) traits at different time points of the growth cycle. Processing such time-series aerial imagery with an appropriate analytical framework is essential for the automatic extraction of phenotypic features from the image data. Many studies have demonstrated the usefulness of advanced remote sensing technologies coupled with machine learning (ML) approaches for accurate prediction of valuable crop traits. Until now, cassava has received little to no attention in aerial image-based phenotyping and ML model testing. Results: To accelerate image processing, an automated image-analysis framework called CIAT Pheno-i was developed to extract plot-level vegetation indices and canopy metrics. Multiple linear regression models were constructed at different key growth stages of cassava, using ground-truth data and vegetation indices obtained from a multispectral sensor. The spectral indices/features were then combined to develop models and predict cassava root yield using different machine learning techniques. Our results showed that: (1) the CIAT Pheno-i image analysis framework was easier and more rapid than manual methods; (2) correlation analysis at four phenological stages of cassava revealed that elongation (EL) and late bulking (LBK) were the most useful stages for estimating above-ground biomass (AGB), below-ground biomass (BGB), and canopy height (CH); (3) multi-temporal analysis revealed that cumulative image feature information from the EL + early bulking (EBK) stages showed a higher significant correlation (r = 0.77 for the Green Normalized Difference Vegetation Index, GNDVI) with BGB than individual time points; canopy height measured on the ground correlated well with UAV-based measurements (CHuav, r = 0.92) at the LBK stage; among the image features, normalized difference red edge index (NDRE) data were consistently highly correlated (r = 0.65 to 0.84) with AGB at the LBK stage; and (4) among the four ML algorithms used in this study, k-nearest neighbours (kNN), random forest (RF), and support vector machine (SVM) showed the best performance for root yield prediction, with the highest accuracies of R² = 0.67, 0.66, and 0.64, respectively. Conclusion: The UAV platforms, time-series image acquisition, automated image analytical framework (CIAT Pheno-i), and key vegetation indices (VIs) for estimating phenotypic traits and root yield described in this work have great potential for use as selection tools in modern cassava breeding programs around the world to accelerate germplasm and varietal selection. The image analysis software (CIAT Pheno-i) developed in this study can be widely applied to other crops to extract phenotypic information rapidly.
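The model comparison in point (4) can be sketched with scikit-learn: kNN, RF, and SVM regressors are trained on plot-level vegetation indices and scored by cross-validated R². The file name, feature columns, and hyperparameters below are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch of the kind of root-yield model comparison described above, using scikit-learn.
# CSV file, column names, and hyperparameters are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

# One row per plot: plot-level VIs at selected growth stages plus harvested root yield.
data = pd.read_csv("cassava_plot_features.csv")
X = data[["GNDVI_EL", "GNDVI_EBK", "NDRE_LBK", "CH_uav_LBK"]]
y = data["root_yield_t_ha"]

models = {
    "kNN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)),
    "RF": RandomForestRegressor(n_estimators=500, random_state=42),
    "SVM": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
}

# Compare the three regressors by cross-validated R^2, as in the abstract's comparison.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.2f}")
```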