Autonomous harvesting is becoming an increasingly important necessity in agriculture, driven by labour shortages and a growing population to feed. Perception is a key component of autonomous harvesting and remains challenging due to difficult lighting conditions, limited sensing technologies, occlusions and plant growth. 3D vision approaches offer several benefits for addressing these challenges, including localisation, size estimation, occlusion handling and shape analysis. In this paper, we propose a novel approach for detecting broccoli heads from 3D information using Convolutional Neural Networks (CNNs), exploiting the organised nature of the point clouds produced by RGB-D sensors. Tested on real-world datasets, the proposed algorithm outperforms the state-of-the-art, with higher accuracy and better generalisation to unseen scenarios, while significantly reducing inference time, making it well suited to real-time in-field applications.
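The efficiency claim rests on the fact that an organised point cloud preserves the sensor's pixel grid, so it can be handed to a CNN much like an image. The following is a minimal sketch of that conversion; the function name `organised_cloud_to_tensor`, the zero-filling of invalid returns and the depth normalisation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def organised_cloud_to_tensor(points, height, width):
    """Reshape an organised point cloud (H*W x 3 array of XYZ
    coordinates, row-major, as delivered by an RGB-D sensor) into
    an image-like H x W x 3 tensor suitable as CNN input.

    Invalid returns (NaNs from the sensor) are zero-filled and the
    depth channel is normalised to [0, 1] for training stability.
    """
    cloud = points.reshape(height, width, 3).astype(np.float32)
    cloud = np.nan_to_num(cloud, nan=0.0)  # mask sensor dropouts
    z = cloud[..., 2]
    z_max = z.max() if z.max() > 0 else 1.0
    cloud[..., 2] = z / z_max              # normalise depth channel
    return cloud

# Example: a simulated 480x640 organised cloud
raw = np.random.rand(480 * 640, 3).astype(np.float32)
tensor = organised_cloud_to_tensor(raw, 480, 640)
print(tensor.shape)  # (480, 640, 3)
```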
Real-time 3D perception of the environment is crucial for the adoption and deployment of reliable autonomous harvesting robots in agriculture. Using data collected with RGB-D cameras under farm field conditions, we present two methods for processing 3D data that reliably detect mature broccoli heads. The proposed systems are efficient and enable real-time detection on depth data of broccoli crops using the organised structure of the point clouds delivered by a depth sensor. The systems are tested with datasets of two broccoli varieties collected in planted fields in two different countries. Our evaluation shows the new methods outperform state-of-the-art approaches for broccoli detection based on both 2D vision-based segmentation techniques and depth clustering using the Euclidean proximity of neighbouring points. The results show the systems are capable of accurately detecting the 3D locations of broccoli heads relative to the vehicle at high frame rates.
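To contrast with the Euclidean-proximity clustering baseline mentioned above, the sketch below shows why the organised structure helps speed: in an organised cloud, a point's spatial neighbours are simply its adjacent pixels, so region growing needs no kd-tree search. The function and the distance threshold are hypothetical illustrations, not the evaluated systems.

```python
import numpy as np
from collections import deque

def grid_region_grow(depth, seed, dist_thresh=0.02):
    """Cluster pixels around a seed on an organised depth image
    (metres) by breadth-first region growing over the pixel grid.

    Because the cloud is organised, neighbour lookup is constant
    time -- the property that makes organised-cloud processing
    fast. `dist_thresh` is an assumed tolerance, not a value from
    the paper.
    """
    h, w = depth.shape
    visited = np.zeros((h, w), dtype=bool)
    cluster, queue = [], deque([seed])
    visited[seed] = True
    while queue:
        r, c = queue.popleft()
        cluster.append((r, c))
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
                if abs(depth[nr, nc] - depth[r, c]) < dist_thresh:
                    visited[nr, nc] = True
                    queue.append((nr, nc))
    return cluster
```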
This work explored the requirements for accurately and reliably predicting user intention during fine-grained movements of the human hand using a deep learning methodology. The focus was on combining a feature-engineering process with the capability of deep learning to identify further salient characteristics from a biological input signal. Three time-domain features (root mean square, waveform length, and slope sign changes) were extracted from the surface electromyography (sEMG) signals of 17 hand and wrist movements performed by 40 subjects. The feature data were mapped to six bend-resistance sensor readings from a CyberGlove II system, representing the associated hand kinematic data. These sensors were located at specific joints of interest on the human hand (the thumb's metacarpophalangeal joint, the proximal interphalangeal joint of each finger, and the radiocarpal joint of the wrist). All datasets were taken from database 2 of the NinaPro online database repository. A 3-layer long short-term memory (LSTM) model with dropout was developed to predict the six glove sensor readings from a corresponding sEMG feature vector. Initial trials on test data from the 40 subjects produced an average mean squared error of 0.176. This indicates a viable pathway for this method of predicting hand movement data, although further work is needed to optimize the model and to analyze the data with a more detailed set of metrics.
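As a rough illustration of the pipeline described above, the sketch below computes the three time-domain features for a single sEMG window and builds a 3-layer LSTM with dropout in Keras. The layer widths, dropout rate, noise threshold and the 36-dimensional feature vector (three features over an assumed 12 sEMG channels, as in NinaPro DB2) are assumptions rather than the paper's exact configuration.

```python
import numpy as np
import tensorflow as tf

def td_features(window, ssc_thresh=1e-4):
    """Compute the three time-domain features for one sEMG window:
    root mean square (RMS), waveform length (WL), and slope sign
    changes (SSC). `ssc_thresh` is an assumed noise threshold."""
    rms = np.sqrt(np.mean(window ** 2))
    wl = np.sum(np.abs(np.diff(window)))
    d = np.diff(window)
    ssc = np.sum((d[:-1] * d[1:] < 0)
                 & (np.abs(d[:-1]) > ssc_thresh)
                 & (np.abs(d[1:]) > ssc_thresh))
    return np.array([rms, wl, ssc], dtype=np.float32)

def build_model(n_features, n_sensors=6):
    """A 3-layer LSTM with dropout mapping an sEMG feature sequence
    to six glove-sensor bend readings (regression)."""
    return tf.keras.Sequential([
        tf.keras.layers.LSTM(128, return_sequences=True,
                             input_shape=(None, n_features)),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.LSTM(128, return_sequences=True),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(n_sensors),
    ])

model = build_model(n_features=36)  # 3 features x 12 sEMG channels
model.compile(optimizer="adam", loss="mse")
```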