Diseases and pests are major threats to agricultural production, food supply security, and plant ecological diversity. However, the accurate recognition of various diseases and pests remains challenging for existing information and intelligence technologies. Disease and pest recognition is typically a fine-grained visual classification problem, which easily confuses traditional coarse-grained methods because of the visual similarity between different categories and the significant variation among samples within the same category. To this end, this paper proposes an effective graph-related high-order network with feature aggregation enhancement (GHA-Net) to handle fine-grained image recognition of plant pests and diseases. In our approach, an improved CSP-stage backbone network is first constructed to provide abundant channel-shuffled features at multiple granularities. Second, relying on a multilevel attention mechanism, a feature aggregation enhancement module is designed to extract distinguishable fine-grained features representing different discriminative parts. Meanwhile, a graph convolution module is constructed to analyse the graph-correlated representation of part-specific interrelationships by regularizing semantic features into a high-order tensor space. Through the collaborative learning of the three modules, our approach captures robust contextual details of diseases and pests for better fine-grained identification. Extensive experiments on several public fine-grained disease and pest datasets demonstrate that the proposed GHA-Net outperforms several existing models in accuracy and efficiency and is better suited to fine-grained identification applications in complex scenes.
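To make the three-module pipeline concrete, the following is a minimal PyTorch sketch of a backbone, an attention-based part-feature aggregation module, and a graph convolution over part features. The layer sizes, the stand-in convolutional backbone, and the class count are illustrative assumptions, not the authors' GHA-Net implementation.

```python
# Minimal PyTorch sketch of the three-module pipeline described in the abstract.
# Module names, dimensions, and the stand-in backbone are illustrative
# assumptions, not the authors' exact GHA-Net implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAggregation(nn.Module):
    """Attention head that pools K discriminative part features from a feature map."""
    def __init__(self, channels, num_parts=4):
        super().__init__()
        self.part_attn = nn.Conv2d(channels, num_parts, kernel_size=1)

    def forward(self, x):                                            # x: (B, C, H, W)
        attn = torch.softmax(self.part_attn(x).flatten(2), dim=-1)   # (B, K, HW)
        feats = x.flatten(2)                                         # (B, C, HW)
        return torch.einsum('bkn,bcn->bkc', attn, feats)             # (B, K, C)

class GraphConvolution(nn.Module):
    """Simple GCN layer relating the K part features to each other."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Linear(channels, channels)

    def forward(self, parts):                                        # parts: (B, K, C)
        adj = torch.softmax(parts @ parts.transpose(1, 2), dim=-1)   # (B, K, K)
        return F.relu(adj @ self.proj(parts))

class FineGrainedNet(nn.Module):
    def __init__(self, channels=256, num_parts=4, num_classes=38):
        super().__init__()
        # Stand-in backbone; the paper uses an improved CSP-stage network.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.aggregate = FeatureAggregation(channels, num_parts)
        self.graph = GraphConvolution(channels)
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):
        parts = self.aggregate(self.backbone(x))   # part-specific features
        parts = self.graph(parts)                  # part interrelationships
        return self.head(parts.mean(dim=1))        # classify pooled parts

logits = FineGrainedNet()(torch.randn(2, 3, 224, 224))  # shape (2, 38)
```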
Land-use classification maps produced by pixel-based methods often suffer from salt-and-pepper noise, which appears as a cluttered distribution of classified pixels within otherwise homogeneous regions. This paper studies crop classification and identification based on time series Sentinel images and object-oriented methods, taking crop recognition and classification in the National Modern Agricultural Industrial Park in Jalaid Banner, Inner Mongolia, as the research object. The Google Earth Engine (GEE) cloud platform is used to extract time series Sentinel radar and optical remote sensing images, and simple non-iterative clustering (SNIC) multiscale segmentation is combined with random forest (RF) and support vector machine (SVM) classification algorithms to classify and identify the major regional crops from radar and spectral features. Compared with the pixel-based method, the combination of SNIC multiscale segmentation and random forest classification based on time series radar and optical remote sensing images effectively reduces the salt-and-pepper phenomenon and improves crop classification accuracy, with a highest accuracy of 98.66% and a kappa coefficient of 0.9823. This study provides a reference for large-scale crop identification and classification work.
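As a rough illustration of the object-oriented pipeline (Sentinel time series, SNIC segmentation, then random-forest classification), the following Earth Engine Python API sketch shows the main steps. The region, dates, sample asset, tree count, SNIC parameters, and output band naming are placeholders and assumptions, not the study's settings.

```python
# Hedged sketch of the object-oriented pipeline (Sentinel composites -> SNIC
# segmentation -> random-forest classification) with the Earth Engine Python API.
# Asset IDs, band choices, dates, and parameters are placeholders.
import ee
ee.Initialize()

region = ee.Geometry.Rectangle([122.0, 46.5, 123.0, 47.0])           # placeholder AOI
samples = ee.FeatureCollection('users/your_account/crop_samples')    # placeholder labels

# Median Sentinel-2 composite over the growing season (optical features).
s2 = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
      .filterBounds(region)
      .filterDate('2021-05-01', '2021-10-01')
      .median()
      .select(['B2', 'B3', 'B4', 'B8']))

# Median Sentinel-1 VV/VH composite over the same period (radar features).
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(region)
      .filterDate('2021-05-01', '2021-10-01')
      .select(['VV', 'VH'])
      .median())

stack = s2.addBands(s1)

# SNIC superpixel segmentation; 'size' and 'compactness' need tuning per scene.
snic = ee.Algorithms.Image.Segmentation.SNIC(
    image=stack, size=10, compactness=1, connectivity=8)
# Per-segment mean features ('_mean' suffix on output bands assumed here).
bands = ['B2', 'B3', 'B4', 'B8', 'VV', 'VH']
object_feats = snic.select([b + '_mean' for b in bands])

# Train a random forest on labelled sample points and classify the objects.
training = object_feats.sampleRegions(collection=samples,
                                      properties=['crop_class'], scale=10)
rf = ee.Classifier.smileRandomForest(200).train(
    features=training, classProperty='crop_class',
    inputProperties=object_feats.bandNames())
classified = object_feats.classify(rf)
```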
This research presents a soft gripper for apple harvesting that provides constant-pressure clamping and avoids fruit damage during slippage, reducing the risk of damage to the apple pericarp during robotic harvesting. First, a three-finger gripper based on the Fin Ray structure is developed, and the influence of varied structural parameters on gripping is discussed. Second, a mechanical model of the proposed servo-driven soft gripper is developed based on the mappings among gripping force, pulling force, and servo torque. Third, a real-time servo control strategy is proposed that monitors the relative position between the gripper and the fruit with an ultrasonic sensor to avoid damage caused by slip between the fruit and the fingers. The experimental results show that the proposed soft gripper can non-destructively grasp and separate apples. In outdoor orchard experiments, the damage rate with the force feedback system turned on was 0%, whereas with the force feedback system turned off the damage rate was 20%, averaged over slight and severe damage. Three configurations of the proposed gripper structure (rigid fingers, and soft fingers with or without slip detection) were each tested by picking 25 apple samples. The picking success rate for the rigid fingers was 100% but with a damage rate of 16%; the picking success rate for soft fingers with slip detection was 80%, with no fruit skin damage; in contrast, the picking success rate for soft fingers with slip detection turned off increased to 96%, with a damage rate of up to 8%. The experimental results demonstrate the effectiveness of the proposed control method.
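The following is a small, simulated sketch of the kind of slip-handling loop the abstract describes: an ultrasonic range reading is used to detect the fruit drifting away from the fingers during the pull, and the servo torque is raised slightly in response. Sensor reads, servo commands, and all thresholds are made-up placeholders, not the authors' controller.

```python
# Illustrative constant-pressure / slip-handling control loop, assuming a
# torque-controlled servo and an ultrasonic gripper-to-fruit distance sensor.
# Sensor reads and servo commands are simulated; all values are placeholders.
import random
import time

GRIP_TORQUE = 0.30        # nominal clamping torque (normalised 0..1)
TORQUE_STEP = 0.05        # increment applied when slip is suspected
SLIP_THRESHOLD_MM = 3.0   # change in fruit distance that suggests slip

def read_ultrasonic_mm():
    """Placeholder for the ultrasonic range reading (gripper to fruit)."""
    return 25.0 + random.uniform(-0.5, 4.0)

def set_servo_torque(torque):
    """Placeholder for the servo torque command."""
    print(f"servo torque -> {torque:.2f}")

def grip_and_pull(max_torque=0.6, cycles=20):
    torque = GRIP_TORQUE
    set_servo_torque(torque)
    baseline = read_ultrasonic_mm()
    for _ in range(cycles):
        distance = read_ultrasonic_mm()
        # If the fruit drifts away from the fingers during the pull,
        # treat it as incipient slip and tighten the grip slightly.
        if distance - baseline > SLIP_THRESHOLD_MM and torque < max_torque:
            torque = min(max_torque, torque + TORQUE_STEP)
            set_servo_torque(torque)
            baseline = distance
        time.sleep(0.05)

if __name__ == "__main__":
    grip_and_pull()
```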
Leaf nitrogen content (LNC) in crops is significant for diagnosing crop growth status and guiding fertilization decisions. Unmanned aerial vehicle (UAV) remote sensing currently plays an important role in estimating crop nitrogen nutrition at the field scale. However, many existing methods of evaluating crop nitrogen from UAV imagery use a single type of imagery, such as RGB or multispectral images, and seldom consider fusing information from different types of UAV imagery to assess the crop nitrogen status. In this study, Gram–Schmidt pan sharpening (GS) was used to fuse images from two UAV-mounted sensors, a digital RGB camera and a multispectral camera whose bands are blue, green, red, red-edge, and NIR. The HSV (hue-saturation-value) color space transformation was used to separate soil background noise from crops, exploiting the high spatial resolution of the UAV images. Two feature-variable optimization methods, the successive projection algorithm (SPA) and competitive adaptive reweighted sampling (CARS), combined with two regularized regression algorithms, LASSO and RIDGE, were adopted to estimate LNC and compared with the commonly used random forest algorithm. The results showed that: (1) LNC estimation using the fused image was distinctly more accurate than estimation from the original multispectral image; (2) the denoised images performed better than the original multispectral images in evaluating rice LNC; (3) the combined RIDGE-SPA method, using SPA to select MCARI, SAVI, and OSAVI, performed best for rice LNC, with an R2 of 0.76 and an RMSE of 10.33%. This demonstrates that fusing multi-sensor UAV imagery, coupled with feature-variable optimization methods, can estimate rice LNC more effectively and can provide a reference for fertilization decision making in rice fields.
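As an illustration of the final regression step only, the sketch below computes the three SPA-selected indices named in the abstract (MCARI, SAVI, OSAVI) from per-plot band reflectances and fits a ridge regression to LNC on synthetic data. The index formulations (with the red-edge band standing in for the 700 nm reflectance) and all data are assumptions for demonstration, not the study's dataset or results.

```python
# Minimal sketch: compute MCARI, SAVI, and OSAVI from per-plot reflectances and
# fit ridge regression to LNC. All data are synthetic; band pairings assumed.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 120
green = rng.uniform(0.02, 0.15, n)
red = rng.uniform(0.02, 0.15, n)
rededge = rng.uniform(0.10, 0.35, n)
nir = rng.uniform(0.25, 0.60, n)

# Vegetation indices (standard formulations; red-edge stands in for R700).
savi = 1.5 * (nir - red) / (nir + red + 0.5)
osavi = (nir - red) / (nir + red + 0.16)
mcari = ((rededge - red) - 0.2 * (rededge - green)) * (rededge / red)

X = np.column_stack([mcari, savi, osavi])
y = 2.5 + 4.0 * savi + 3.0 * mcari + rng.normal(0, 0.2, n)  # synthetic LNC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, "
      f"RMSE = {np.sqrt(mean_squared_error(y_te, pred)):.2f}")
```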