Given the challenges of reducing greenhouse-gas (GHG) emissions, the agricultural sector is one of those that has attracted the most attention in the 2030 Sustainable Development Agenda (SDA-2030). In this context, one of the crops with the most remarkable development worldwide has been oil palm, thanks to its high productive potential and its standing as one of the most efficient sources of palmitic acid. However, despite the significant presence of oil palm in the food sector, its cultivation has not been exempt from criticism, as it has expanded mainly into areas of ecological conservation around the world. This criticism has extended to other crops in the context of the Sustainable Development Goals (SDGs) because of the insecticides and fertilisers required to treat phytosanitary events in the field. To mitigate this problem, researchers have used unmanned aerial vehicles (UAVs) to capture multi-spectral aerial images (MAIs) that assess plant vigour in the field and detect phytosanitary events early through vegetation indices (VIs). However, detecting phytosanitary events in their early stages still poses a technological challenge. Thus, to improve the environmental and financial sustainability of oil-palm crops, this paper proposes a hybrid deep-learning model (stacked–convolutional) for characterising the risk derived from a phytosanitary event (PE) such as lethal wilt (LW). For this purpose, the proposed model integrates a Lagrangian dispersion model of the backward-Gaussian-puff-tracking type into its convolutional structure, which describes the evolution of LW in the field for stages before a temporal reference scenario. The results show that the proposed model characterised the risk derived from a PE such as LW in the field, promoting improvement in agricultural environmental and financial sustainability through the integration of financial-risk concepts.
This improved risk management will lead to lower projected losses thanks to a natural reduction in insecticide and fertiliser use, allowing a balance between development and sustainability for this type of crop under RSPO standards.
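The abstract above names a Lagrangian dispersion model of the Gaussian-puff type as the component that describes how LW spreads through the field. The sketch below shows only the generic single-puff Gaussian concentration formula (with ground reflection), which such models evaluate for each tracked puff; it is a minimal illustration, not the authors' backward-tracking implementation, and all parameter names are assumptions.

```python
import numpy as np

def puff_concentration(q, x, y, z, sx, sy, sz, h=0.0):
    """Concentration at a receptor due to a single Gaussian puff.

    q        : mass carried by the puff (e.g. pathogen or pollutant load)
    x, y, z  : receptor position relative to the puff centre
    sx,sy,sz : dispersion coefficients (sigma_x, sigma_y, sigma_z),
               which grow with puff travel time
    h        : effective release height; the ground acts as a mirror,
               so a reflected image term at -h is added
    """
    norm = q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
    horiz = np.exp(-0.5 * (x / sx) ** 2 - 0.5 * (y / sy) ** 2)
    vert = (np.exp(-0.5 * ((z - h) / sz) ** 2)
            + np.exp(-0.5 * ((z + h) / sz) ** 2))
    return norm * horiz * vert
```

In a puff-tracking scheme the total concentration at a point is the sum of this expression over all puffs released so far, each advected along the wind field with its own growing sigmas.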
This article presents an intelligent system that uses deep-learning algorithms and a transfer-learning approach to detect oil-palm units in multispectral photographs taken with unmanned aerial vehicles. Two main contributions come from this research. First, a dataset for oil-palm unit detection is carefully produced and made available online; although tailored to the palm-detection problem, it has general validity and can be used for other classification applications. Second, we designed and evaluated a state-of-the-art detection system that uses a convolutional neural network to extract meaningful features and a classifier trained with images from the proposed dataset. Results show outstanding effectiveness, with an accuracy peak of 99.5% and a precision of 99.8%. On a separate validation set of images taken from different altitudes, the model reached an accuracy of 97.5% and a precision of 98.3%. Hence, the proposed approach is highly applicable in the field of precision agriculture.
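The transfer-learning pattern the abstract describes, a frozen convolutional feature extractor feeding a classifier trained on the new dataset, can be sketched in miniature with numpy. Everything below is a toy stand-in under stated assumptions: random fixed filters play the role of the pretrained CNN backbone, the synthetic 8×8 "images" are not the paper's dataset, and the head is plain logistic regression rather than the authors' classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(images, filters):
    """Frozen feature extractor: valid 3x3 convolution + ReLU + global
    average pooling per filter. Stands in for a pretrained CNN backbone."""
    n, hgt, wid = images.shape
    feats = np.zeros((n, filters.shape[0]))
    for f, filt in enumerate(filters):
        for i, img in enumerate(images):
            acc = np.zeros((hgt - 2, wid - 2))
            for dy in range(3):
                for dx in range(3):
                    acc += filt[dy, dx] * img[dy:dy + hgt - 2, dx:dx + wid - 2]
            feats[i, f] = np.maximum(acc, 0.0).mean()  # ReLU + pooling
    return feats

# Toy data: class-1 "palm" images carry a bright central blob.
def make_batch(n):
    imgs = rng.normal(0.0, 0.1, (n, 8, 8))
    labels = rng.integers(0, 2, n)
    imgs[labels == 1, 3:5, 3:5] += 1.0
    return imgs, labels

filters = rng.normal(0.0, 1.0, (4, 3, 3))   # frozen, "pretrained" weights
X, y = make_batch(200)
F = conv_features(X, filters)
F = (F - F.mean(axis=0)) / F.std(axis=0)    # standardise the features

# Trainable head: logistic regression on the frozen features.
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    grad = p - y
    w -= 0.5 * F.T @ grad / len(y)
    b -= 0.5 * grad.mean()

train_acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y).mean()
```

Only the head's weights `w, b` are updated; the convolutional filters stay fixed, which is exactly what makes the approach cheap to retrain on a new aerial-imagery dataset.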
Research on gesture-recognition systems has been on the rise due to the capabilities these systems bring to human–machine interaction. However, gesture recognition in prostheses and orthoses has typically relied on a large number of channels and electrodes to acquire electromyography (EMG) signals, increasing the cost and complexity of these systems. The scientific literature shows different approaches to gesture recognition based on the analysis of EMG signals using deep-learning models, highlighting recurrent neural networks with deep structures. This paper presents a Recurrent Neural Network (RNN) model using Long Short-Term Memory (LSTM) units and dense layers to implement a gesture classifier for hand-prosthesis control, aiming to decrease the number of EMG channels and the overall model complexity in order to increase its scalability for embedded systems. The proposed model requires only four EMG channels to recognize five hand gestures, greatly reducing the number of electrodes compared with other approaches found in the literature. The model was trained on a dataset of EMG signals for each gesture, recorded for 20 s using a custom EMG armband. It reached an accuracy of 99% in the training and validation stages, and an accuracy of 87 ± 7% during real-time testing. These results establish a general methodology for reducing complexity in gesture recognition intended for human–machine interaction across different computational devices.
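The core of the model described above is an LSTM layer consuming a four-channel EMG sequence, with dense layers on top. The numpy sketch below shows the standard LSTM recurrence for one cell and how the final hidden state would be handed to a dense classifier head; it is a minimal illustration of the mechanism, not the authors' network, and the gate ordering, shapes, and weight names are assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step.

    x : input vector (e.g. one 4-channel EMG sample), shape (D,)
    h, c : previous hidden and cell states, shape (H,)
    W : input weights, shape (4H, D); U : recurrent weights, (4H, H)
    b : bias, (4H,); gates stacked as [input, forget, cell, output].
    """
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2 * H]))   # forget gate
    g = np.tanh(z[2 * H:3 * H])             # candidate cell state
    o = 1.0 / (1.0 + np.exp(-z[3 * H:]))    # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def run_lstm(seq, W, U, b, H):
    """Run a sequence of EMG samples through the cell; the final hidden
    state is what the dense classification layers would consume."""
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return h
```

Because the hidden state is a fixed-size vector regardless of sequence length, the dense head stays small, which is what makes this kind of classifier attractive for embedded prosthesis controllers.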