Digitalization in agriculture calls for critical research into the application of artificial intelligence across its specialized domains. This work investigated whether image synthesis technology can mitigate the data-volume constraint on digital plant disease phenotyping accuracy. We designed an experiment in which a deep convolutional generative adversarial network (DC-GAN) synthesized photorealistic images of healthy and bacterial spot disease-infected tomato leaves, starting from a training dataset of 1,272 real instances per class. We then used a 3-block visual geometry group (VGG) convolutional neural network (CNN) with dropout regularization, trained for one epoch, to compare the classification accuracies obtained with the original dataset and with several synthetic datasets. The third DC-GAN-synthesized training dataset, containing 3,816 synthetic examples of each class (healthy and bacterial spot-infected), outperformed the original training dataset of 1,272 real examples per class: 77.088% accuracy versus 76.447% on the same classifier.
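As an illustration of the classifier described above, the following is a minimal sketch of a 3-block VGG-style CNN with dropout regularization in Keras. The filter counts, dropout rates, and 128x128 input size are assumptions for the sketch; the abstract does not report the exact hyperparameters.

```python
# Minimal sketch of a 3-block VGG-style CNN with dropout (Keras).
# Filter counts, dropout rates, and the 128x128 input size are assumptions;
# the abstract does not specify the exact hyperparameters.
from tensorflow.keras import layers, models

def build_vgg3_dropout(input_shape=(128, 128, 3), num_classes=2):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Three VGG-style blocks: Conv-Conv-MaxPool with increasing filter counts.
    for filters in (32, 64, 128):
        model.add(layers.Conv2D(filters, (3, 3), activation="relu", padding="same"))
        model.add(layers.Conv2D(filters, (3, 3), activation="relu", padding="same"))
        model.add(layers.MaxPooling2D((2, 2)))
        model.add(layers.Dropout(0.2))  # dropout regularization per block (assumed rate)
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage: model.fit(x_train, y_train, epochs=1) mirrors the single-epoch setup.
```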
Despite its widespread use as a highly efficient dimensionality reduction technique, Principal Component Analysis (PCA) has received limited research attention as a means of compressing and reconstructing image data to benefit machine learning-based image classification performance and storage space optimization. To address this gap, we compared two Convolutional Neural Network-Random Forest (CNN-RF) guava leaf image classification models: one trained on the original guava leaf images that fit within a predefined amount of storage space, and one trained on the PCA compressed/reconstructed guava leaf images that fit within the same amount of storage space. The comparison used four criteria: Accuracy, F1-Score, Phi Coefficient and the Fowlkes–Mallows index. Our approach achieved a 1:100 image compression ratio (99.00% image compression), considerably better than previously reported results for arithmetic coding (1:1.50), wavelet transform (90.00% image compression), and a combination of three transform-based techniques, Discrete Fourier (DFT), Discrete Wavelet (DWT) and Discrete Cosine (DCT) (1:22.50). From a subjective visual quality perspective, the PCA compressed/reconstructed guava leaf images showed almost no loss of image detail. Finally, the CNN-RF model trained on PCA compressed/reconstructed guava leaf images outperformed the CNN-RF model trained on original guava leaf images by 0.10% in accuracy, 0.10 in F1-Score, 0.18 in Phi Coefficient and 0.09 in Fowlkes–Mallows index.
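The sketch below illustrates PCA-based compression and reconstruction of a single grayscale leaf image with scikit-learn. The number of retained components and the image size are assumptions chosen for illustration, not the paper's actual settings.

```python
# Minimal sketch of PCA compression/reconstruction of a grayscale image
# (scikit-learn). The component count and image size are assumptions; the
# abstract does not report the exact settings used in the study.
import numpy as np
from sklearn.decomposition import PCA

def pca_compress_reconstruct(image, n_components=10):
    """Treat each image row as a sample, keep n_components principal
    components, then project back to pixel space."""
    pca = PCA(n_components=n_components)
    compressed = pca.fit_transform(image)          # shape: (rows, n_components)
    reconstructed = pca.inverse_transform(compressed)
    return np.clip(reconstructed, 0, 255)

# Usage on a synthetic 256x256 image; retaining 10 of 256 components stores
# only a small fraction of the original pixel data.
img = np.random.rand(256, 256) * 255
recon = pca_compress_reconstruct(img, n_components=10)
```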
Over the last decade and a half, ConvNet models have shown increasingly impressive performance on image classification tasks. The continuing quest for better-performing ConvNets matters because it opens an avenue for superior models that can improve much-needed services to humankind in domains such as crop pest and disease detection. This paper proposes a feature extraction ConvNet, called Detection, with the aim of improving plant disease recognition. Detection contains four convolutional layers, four batch normalization layers and two max-pooling layers. For reference, its plant disease recognition performance was compared with that of the feature extraction layers of AlexNet, LeNet-5, ZFNet and VGGNet on four datasets from PlantVillage: bacterial spot disease (bell pepper), late blight disease (tomato), leaf mold disease (tomato) and yellow leaf curl disease (tomato). Prior to ablation testing, Detection achieved the second-highest overall classification accuracy across the four disease recognition datasets (65.50%), behind AlexNet (84.33%). However, ablation tests revealed that removing the second convolutional layer from the network raised overall accuracy on the four datasets by 24.08%, up from 65.50% without the ablation, surpassing AlexNet's overall accuracy (84.33%) by 5.25%. Likewise, removing the second pooling layer raised overall accuracy by 23.33%, up from 65.50% without the ablation, surpassing AlexNet's overall accuracy (84.33%) by 4.40%. The results suggest that the proposed feature extraction ConvNet is a performant method for plant disease recognition.
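For illustration, here is a minimal Keras sketch of a feature extractor with the stated layer counts (four convolutional, four batch normalization and two max-pooling layers). The filter counts, kernel sizes, layer ordering and classification head are assumptions, since the abstract gives only the layer-type counts.

```python
# Minimal sketch of a feature extractor with four convolutional layers,
# four batch normalization layers and two max-pooling layers (Keras).
# Filter counts, kernel sizes, layer ordering and the classifier head are
# assumptions; the abstract only reports the layer-type counts.
from tensorflow.keras import layers, models

def build_detection_like(input_shape=(128, 128, 3), num_classes=2):
    model = models.Sequential([layers.Input(shape=input_shape)])
    # Two stages of Conv-BN-Conv-BN-MaxPool give 4 conv, 4 BN and 2 pooling layers.
    for filters in (32, 64):
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D((2, 2)))
    # Simple classification head for the binary PlantVillage tasks (assumed).
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model
```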