Food quality and safety incidents have occurred frequently in recent years, attracting growing attention from social and international organizations. Given the increased quality risk in the food supply chain, many researchers have applied various information technologies to develop real-time risk identification and traceability systems (RITSs) for better food safety assurance. This paper presents an innovative approach that uses a deep-stacking network for hazardous risk identification, relying on massive multisource data monitored in real time by the Internet of Things across the whole food supply chain. The proposed method aims to help managers and operators in food enterprises determine accurate food-safety risk levels in advance and to provide regulatory authorities and consumers with potential rules for better decision-making, thereby maintaining the safety and sustainability of the food supply. Verification experiments show that the proposed method achieves the best prediction accuracy, up to 97.62%, while keeping the model size to only 211.26 megabytes. Moreover, a case analysis illustrates the superior performance of the proposed method in risk-level identification. It can effectively enhance the ability of an RITS to assure food supply chain security and to foster cooperation among regulators, enterprises, and consumers.
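The deep-stacking idea, in which each stacked module receives the raw input again alongside the previous module's output, can be illustrated with a minimal NumPy sketch. All dimensions, the tanh modules, and the random weights below are illustrative assumptions for a forward pass only, not the paper's trained architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def dsn_forward(x, modules):
    """Forward pass of a deep-stacking network: each module receives the
    raw input concatenated with the previous module's output."""
    feed = x
    h = None
    for W, b in modules:
        h = np.tanh(feed @ W + b)              # one stacked module
        feed = np.concatenate([x, h], axis=1)  # re-inject raw input
    return h

# Toy dimensions: 8 monitored features -> 4 risk-level scores, 3 modules
d_in, d_out = 8, 4
modules, feed_dim = [], d_in
for _ in range(3):
    modules.append((rng.standard_normal((feed_dim, d_out)) * 0.1,
                    np.zeros(d_out)))
    feed_dim = d_in + d_out                    # next module sees [x, h]

x = rng.standard_normal((5, d_in))             # 5 monitored samples
scores = dsn_forward(x, modules)
print(scores.shape)                            # (5, 4)
```

In a real system each module would be trained, and the final scores mapped to discrete risk levels; the point of the sketch is only the stacking-with-raw-input pattern.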
Diseases and pests are major threats to agricultural production, food supply security, and plant ecological diversity. However, accurately recognizing the many disease and pest species remains challenging even for advanced information and intelligence technologies. Disease and pest recognition is typically a fine-grained visual classification problem, which easily confuses traditional coarse-grained methods because of the visual similarity between different categories and the significant variation among samples within the same category. To this end, this paper proposes an effective graph-related high-order network with feature aggregation enhancement (GHA-Net) for fine-grained image recognition of plant pests and diseases. In our approach, an improved CSP-stage backbone network is first formed to provide abundant channel-shuffled features at multiple granularities. Secondly, relying on a multilevel attention mechanism, a feature aggregation enhancement module is designed to exploit distinguishable fine-grained features representing different discriminative parts. Meanwhile, a graph convolution module is constructed to analyse the graph-correlated representation of part-specific interrelationships by regularizing semantic features into a high-order tensor space. Through the collaborative learning of the three modules, our approach captures robust contextual details of diseases and pests for better fine-grained identification. Extensive experiments on several public fine-grained disease and pest datasets demonstrate that the proposed GHA-Net surpasses several existing models in accuracy and efficiency and is better suited to fine-grained identification in complex scenes.
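The graph convolution over part-specific relationships can be sketched with a standard normalized graph-convolution step applied to part features. The adjacency matrix, feature sizes, and random weights below are hypothetical examples, not GHA-Net's actual parameters:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step over part features: H' = ReLU(A_norm H W),
    where A_norm is the symmetrically normalized adjacency with self-loops."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)      # ReLU

# 4 discriminative parts, 16-d semantic features, projected to 8-d
rng = np.random.default_rng(1)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)       # part-to-part relations
H = rng.standard_normal((4, 16))                # per-part semantic features
W = rng.standard_normal((16, 8)) * 0.1
H_out = gcn_layer(A, H, W)
print(H_out.shape)                              # (4, 8)
```

Each part's output feature is thus a mixture of its neighbours' features, which is how relational context between discriminative parts is propagated.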
To monitor tomato growth periods and predict yield in tomato cultivation, our study proposes a visual object tracking network called YOLO-deepsort to identify and count tomatoes at different growth stages. Based on the YOLOv5s model, our model uses ShuffleNetV2 combined with the CBAM attention mechanism to compress the model size at the algorithmic level. In the neck of the network, a BiFPN multi-scale fusion structure is used to improve prediction accuracy. Once the detection network outputs a bounding box for a target, a Kalman filter, called the tracker in this paper, predicts the target's location in the next frame. The error between the predicted bounding box and the box output by the detector is then used to update the Kalman filter's parameters, and these steps are repeated to track tomato fruits and flowers. Given the tracking results, we use OpenCV to draw a virtual counting line and count the targets that cross it. The algorithm achieves competitive results: the mean average precision for flowers, green tomatoes, and red tomatoes is 93.1%, 96.4%, and 97.9%, respectively. Moreover, we demonstrate the model's tracking ability and the counting process by counting tomato flowers. Overall, the YOLO-deepsort model fulfils the practical requirements of tomato yield forecasting in greenhouse scenes and provides theoretical support for crop growth status detection and yield forecasting.
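The predict-update-count loop described above can be sketched with a one-dimensional constant-velocity Kalman filter and a virtual counting line. The noise settings, the detection sequence, and the 1-D simplification (the real tracker filters full box coordinates) are toy assumptions, not the paper's configuration:

```python
import numpy as np

# Constant-velocity Kalman filter for one tracked target centre (1-D).
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: [pos, vel]
Hm = np.array([[1.0, 0.0]])              # we observe position only
Q = np.eye(2) * 1e-3                     # process noise
R = np.array([[0.5]])                    # detector measurement noise

def predict(x, P):
    """Project the state ahead one frame."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with the detector's measurement z."""
    y = z - Hm @ x                       # detection vs. prediction error
    S = Hm @ P @ Hm.T + R
    K = P @ Hm.T @ np.linalg.inv(S)      # Kalman gain
    return x + K @ y, (np.eye(2) - K @ Hm) @ P

count_line = 50.0                        # virtual counting line (pixels)
x, P, counted = np.array([0.0, 0.0]), np.eye(2), 0
for z in [10.0, 22.0, 35.0, 48.0, 61.0]:   # per-frame detections
    prev = x[0]
    x, P = predict(x, P)
    x, P = update(x, P, np.array([z]))
    if prev < count_line <= x[0]:        # track crossed the line
        counted += 1
print(counted)                           # 1
```

In the full system, OpenCV would draw the line on the frame and the crossing test would run once per tracked identity, so each fruit or flower is counted exactly once.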
In modern agriculture and environmental protection, effective identification of crop diseases and pests is very important for intelligent management systems and mobile computing applications. However, existing identification methods mainly rely on machine learning and deep networks that perform coarse-grained classification with large-scale parameters and complex structure fitting, and they lack the ability to identify fine-grained features and mine the inherent correlations among pests. To address these problems, a fine-grained pest identification method based on a graph pyramid attention convolutional neural network (GPA-Net) is proposed to promote agricultural production efficiency. First, a CSP backbone network is constructed to obtain rich feature maps. Then, a cross-stage trilinear attention module is constructed to extract as many of the fine-grained features of the discriminative parts of pest objects as possible. Moreover, a multilevel pyramid structure is designed to learn multiscale spatial features and graph relations to strengthen pest and disease recognition. Finally, comparative experiments on the cassava leaf, AI Challenger, and IP102 pest datasets demonstrate that the proposed GPA-Net outperforms existing models, with accuracy up to 99.0%, 97.0%, and 56.9%, respectively, making it better suited to distinguishing crop pests and diseases in practical smart agriculture and environmental protection applications.
Accurate identification of insect pests is key to improving crop yield and ensuring quality and safety. However, under varying environmental conditions, pests of the same species show obvious intraclass differences, while pests of different species can appear subtly similar. Traditional methods struggle with such fine-grained pest identification and are difficult to deploy in practice. To solve this problem, this paper uses a variety of terminal devices in the agricultural Internet of Things to acquire large numbers of pest images and proposes a fine-grained pest identification model based on a probability fusion network (FPNT). The model designs a fine-grained feature extractor based on an optimized CSPNet backbone network, mining local feature representations at different levels that can distinguish subtle differences. After integrating a NetVLAD aggregation layer, a gated probability fusion layer fully exploits the information complementarity and confidence coupling of multi-model fusion. Comparative tests show that the FPNT model achieves an average recognition accuracy of 93.18% across all pest classes, outperforming other deep-learning methods, with the average processing time reduced to 61 ms. It can thus meet the needs of fine-grained pest image recognition in the agricultural and forestry Internet of Things and provide a technical reference for intelligent early warning and prevention of pests.
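One plausible reading of confidence-coupled probability fusion can be sketched as follows. The hand-crafted confidence gate (each model weighted by its maximum class probability) is a simple stand-in for whatever learned gating the model uses, and the logits are invented:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gated_fusion(prob_list):
    """Fuse per-model class probabilities, weighting each model by its
    confidence (maximum class probability) so uncertain branches
    contribute less to the final decision."""
    probs = np.stack(prob_list)                 # (models, classes)
    gates = probs.max(axis=1)                   # confidence per model
    gates = gates / gates.sum()                 # normalise gate weights
    fused = (gates[:, None] * probs).sum(axis=0)
    return fused / fused.sum()

# Two hypothetical branch outputs (logits) for a 3-class pest image
p1 = softmax(np.array([2.0, 0.1, 0.1]))         # confident: class 0
p2 = softmax(np.array([0.3, 0.4, 0.3]))         # uncertain branch
fused = gated_fusion([p1, p2])
print(fused.argmax())                           # 0
```

The complementary branches thus vote in proportion to their own certainty, which is the intuition behind coupling confidence with multi-model fusion.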