This paper describes a light detection and ranging (LiDAR)-based autonomous navigation system for an ultralightweight ground robot in agricultural fields. The system is designed for reliable navigation under cluttered canopies using a single 2D Hokuyo UTM-30LX LiDAR as the sole perception sensor. Its purpose is to ensure that the robot can navigate through rows of crops without damaging the plants in narrow row-based and high-leaf-cover semistructured crop plantations, such as corn (Zea mays) and sorghum (Sorghum bicolor). The key contribution of our work is a LiDAR-based navigation algorithm capable of rejecting outlying measurements in the point cloud caused by plants in adjacent rows, low-hanging leaf cover, or weeds. The algorithm addresses this challenge with a set of heuristics designed to filter out outlying measurements in a computationally efficient manner, and linear least squares is then applied to the filtered data to estimate the within-row distance. A further crucial step is estimate validation, achieved through a heuristic that grades and validates the fitted row lines based on current and previous information. The proposed LiDAR-based perception subsystem has been extensively tested in production/breeding corn and sorghum fields. Across this variety of highly cluttered real field environments, the robot logged more than 6 km of autonomous operation in straight rows. These results demonstrate highly promising advances in LiDAR-based navigation in realistic field environments for small under-canopy robots.
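The filter-then-fit step described in this abstract can be sketched as below. This is an illustrative reconstruction, not the paper's exact heuristics: the median-based rejection threshold `max_lateral_dev` and the robot-frame convention (x lateral, y forward) are assumptions standing in for the authors' filter set.

```python
import numpy as np

def fit_row_line(points, max_lateral_dev=0.15):
    """Fit a row line x = m*y + b to 2D LiDAR returns (robot frame:
    x lateral, y forward) after a simple outlier-rejection heuristic.

    Points farther than `max_lateral_dev` metres from the median
    lateral offset are treated as leaf/adjacent-row returns and
    dropped (assumed heuristic, for illustration only).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Heuristic outlier rejection around the median lateral offset.
    keep = np.abs(x - np.median(x)) < max_lateral_dev
    x, y = x[keep], y[keep]
    # Linear least squares on the filtered data: x = m*y + b.
    A = np.column_stack([y, np.ones_like(y)])
    (m, b), *_ = np.linalg.lstsq(A, x, rcond=None)
    # b approximates the lateral within-row distance at the sensor.
    return m, b
```

A downstream validation step, as the abstract notes, would then grade the fitted line against previous estimates before accepting it.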
We present a self-supervised approach for learning to predict traversable paths for wheeled mobile robots that require good traction to navigate. Our algorithm, termed WayFAST (Waypoint Free Autonomous Systems for Traversability), uses RGB and depth data, along with navigation experience, to autonomously generate traversable paths in outdoor unstructured environments. Our key insight is that traction can be estimated for rolling robots using kinodynamic models. Using traction estimates provided by an online receding-horizon estimator, we are able to train a traversability prediction neural network in a self-supervised manner, without requiring the heuristics used by previous methods. We demonstrate the effectiveness of WayFAST through extensive field testing in varied environments, ranging from sandy dry beaches to forest canopies and snow-covered grass fields. Our results clearly demonstrate that WayFAST can learn to avoid geometric obstacles as well as untraversable terrain, such as snow, which would be difficult to avoid with sensors that provide only geometric data, such as LiDAR. Furthermore, we show that our training pipeline based on online traction estimates is more data-efficient than other heuristic-based methods.
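The core idea that traction can be read off a kinodynamic model can be sketched minimally as the ratio of achieved to commanded velocity under a slip-scaled unicycle model. This is an assumed simplification for illustration, not WayFAST's receding-horizon estimator; the function and parameter names are hypothetical.

```python
import numpy as np

def traction_coefficients(v_cmd, v_meas, w_cmd, w_meas, eps=0.05):
    """Estimate linear/angular traction coefficients mu_v, mu_w in a
    slip-scaled unicycle model v = mu_v * v_cmd, w = mu_w * w_cmd.

    Inputs are sequences of commanded and measured velocities over a
    short window. Commands below `eps` are skipped as uninformative,
    and per-sample ratios are clipped to [0, 1] before averaging.
    """
    v_cmd, v_meas = np.asarray(v_cmd, float), np.asarray(v_meas, float)
    w_cmd, w_meas = np.asarray(w_cmd, float), np.asarray(w_meas, float)
    mv = np.abs(v_cmd) > eps          # samples with a meaningful linear command
    mw = np.abs(w_cmd) > eps          # samples with a meaningful angular command
    mu_v = float(np.clip(v_meas[mv] / v_cmd[mv], 0.0, 1.0).mean())
    mu_w = float(np.clip(w_meas[mw] / w_cmd[mw], 0.0, 1.0).mean())
    return mu_v, mu_w
```

In a self-supervised pipeline, such coefficients (near 1 on firm ground, near 0 on slippery snow or sand) would serve as training labels for the traversability network, replacing hand-designed heuristics.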
Small robotic vehicles have been navigating agricultural fields in pursuit of new possibilities to increase agricultural production and meet growing food and energy demands. However, a perception system with reliable awareness of the surroundings remains a challenge for autonomous navigation. Cameras and single-layer laser scanners have been the primary sources of information, yet the former suffer from sensitivity to outdoor lighting and both from occlusion by leaves. This paper describes a three-dimensional acquisition system for corn crops. The sensing core is a single-layer UTM-30LX laser scanner rotating around its axis, while an inertial sensor provides angular measurements. With the rotation, multiple layers are combined into a 3D point cloud, which is represented by a two-dimensional occupancy grid. Each cell is filled according to the number of readings, and cell weights derive from two procedures: first, a mask enhances vertical entities (stalks); second, two Gaussian functions centered on the expected positions of the immediate neighboring rows weaken readings in the middle of the lane and in farther rows. The resulting occupancy grid represents the corn rows as virtual walls, which serve as references for a wall-follower algorithm. According to experimental results, the virtual walls are segmented with reduced influence from stray leaves and sparse weeds compared to segmentation done with single-layer laser scanner data. Indeed, 64.02% of 3D outputs fall within a 0.05 m error limit of the expected lane width, while only 11.63% of single-layer laser data fall within the same limit.
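The second weighting procedure, two Gaussians centred on the expected neighbouring rows, can be sketched as below. The lane width and spread values are assumed placeholders; the paper's actual parameters are not given here.

```python
import numpy as np

def row_weight(x, lane_width=0.75, sigma=0.1):
    """Weight an occupancy-grid cell by its lateral offset x (m) from
    the lane centre, using two Gaussians centred on the expected
    neighbouring-row positions at +/- lane_width / 2.

    Cells near the expected rows keep their readings; cells in the
    middle of the lane (stray leaves, weeds) or in farther rows are
    weakened. Parameter values are illustrative assumptions.
    """
    mu = lane_width / 2.0
    return (np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
            + np.exp(-((x + mu) ** 2) / (2 * sigma ** 2)))
```

Multiplying each cell's hit count by this weight (after the vertical-entity mask) yields the virtual walls that the wall-follower tracks.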
In digital farming, the use of technology to increase agricultural production through automated tasks has recently come to include the development of AgBots for more reliable data collection using autonomous navigation. These AgBots are equipped with various sensors such as GNSS, cameras, and LiDAR, but each sensor has limitations: low GNSS accuracy for under-canopy navigation, camera sensitivity to outdoor lighting and platform vibration, and LiDAR occlusion. To address these limitations and ensure robust autonomous navigation, this paper presents a sensor selection methodology based on identifying environmental conditions from sensor data. By extracting features from GNSS, images, and point clouds, we determine the feasibility of using each sensor and create a selection vector indicating its viability. Our results demonstrate that the proposed methodology effectively selects cameras or LiDAR within crops and GNSS outside of crops at least 87% of the time. The main limitation found is that GNSS features take 20 s to adapt during transitions from inside to outside the crop and vice versa. We compare a variety of classification algorithms in terms of performance and computational cost, and the results show that our method achieves higher performance at lower computational cost. Overall, this methodology allows low-cost selection of the most suitable sensor for a given agricultural environment.
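The selection-vector idea can be sketched with simple per-sensor feature thresholds. The feature names and threshold values below are illustrative assumptions; the paper uses learned classifiers over extracted features rather than fixed rules.

```python
def select_sensors(gnss_hdop, image_brightness_var, lidar_return_density):
    """Build a viability vector (gnss_ok, camera_ok, lidar_ok) from
    per-sensor features (all names/thresholds assumed):
      - gnss_hdop: horizontal dilution of precision; degrades under canopy
      - image_brightness_var: lighting instability affecting the camera
      - lidar_return_density: fraction of valid returns; drops when occluded
    """
    gnss_ok = gnss_hdop < 2.0
    camera_ok = image_brightness_var < 50.0
    lidar_ok = lidar_return_density > 0.3
    return (gnss_ok, camera_ok, lidar_ok)
```

Under-canopy, the GNSS entry of this vector goes false and the navigation stack would fall back to camera or LiDAR; outside the crop, GNSS becomes viable again, matching the behaviour reported in the abstract.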