This paper describes a light detection and ranging (LiDAR)-based autonomous navigation system for an ultralightweight ground robot in agricultural fields. The system is designed for reliable navigation under cluttered canopies using only a 2D Hokuyo UTM-30LX LiDAR sensor as the sole source of perception. Its purpose is to ensure that the robot can navigate through crop rows without damaging the plants in narrow-row, high-leaf-cover, semistructured crop plantations such as corn (Zea mays) and sorghum (Sorghum bicolor). The key contribution of our work is a LiDAR-based navigation algorithm capable of rejecting outlying point-cloud measurements caused by plants in adjacent rows, low-hanging leaf cover, or weeds. The algorithm addresses this challenge with a set of heuristics designed to filter out outlying measurements in a computationally efficient manner, after which linear least squares is applied to the filtered data to estimate the within-row distance. Moreover, a crucial step is estimate validation, achieved through a heuristic that grades and validates the fitted row lines based on current and previous information. The proposed LiDAR-based perception subsystem has been extensively tested in production and breeding corn and sorghum fields. Across this variety of highly cluttered real field environments, the robot logged more than 6 km of autonomous operation in straight rows. These results demonstrate highly promising advances in LiDAR-based navigation for small under-canopy robots in realistic field environments.
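To illustrate the core estimation step described above, the sketch below gates 2D LiDAR returns from one side of the robot with a simple outlier heuristic and then fits a row line by linear least squares to recover the within-row distance. This is only a minimal stand-in for the paper's heuristics, not the authors' implementation; the function name, the median-based gate, and the `max_lateral_dev` threshold are assumptions.

```python
import numpy as np

def fit_row_line(points, max_lateral_dev=0.4):
    """Fit a row line y = m*x + b to 2D LiDAR hits (robot frame, x forward).

    points: (N, 2) array of (x, y) hits already restricted to one side of the robot.
    max_lateral_dev: hypothetical gate (m) that discards hits far from the median
        lateral offset, standing in for the paper's outlier-rejection heuristics.
    """
    y_med = np.median(points[:, 1])
    keep = np.abs(points[:, 1] - y_med) < max_lateral_dev   # crude clutter filter
    x, y = points[keep, 0], points[keep, 1]

    # Linear least squares: solve [x 1] @ [m, b]^T ~= y
    A = np.column_stack([x, np.ones_like(x)])
    (m, b), *_ = np.linalg.lstsq(A, y, rcond=None)

    # Perpendicular distance from the robot (origin) to the fitted row line
    within_row_dist = abs(b) / np.hypot(m, 1.0)
    return m, b, within_row_dist

# Example: noisy hits along a row roughly 0.35 m to the robot's left
pts = np.column_stack([np.linspace(0.2, 2.0, 50),
                       0.35 + 0.02 * np.random.randn(50)])
m, b, d = fit_row_line(pts)
```

The fitted slope and offset could then be graded against the previous estimate before being accepted, mirroring the validation step described in the abstract.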
We present a self-supervised approach for learning to predict traversable paths for wheeled mobile robots that require good traction to navigate. Our algorithm, termed WayFAST (Waypoint Free Autonomous Systems for Traversability), uses RGB and depth data, along with navigation experience, to autonomously generate traversable paths in outdoor unstructured environments. Our key inspiration is that traction can be estimated for rolling robots using kinodynamic models. Using traction estimates provided by an online receding-horizon estimator, we are able to train a traversability prediction neural network in a self-supervised manner, without requiring the heuristics utilized by previous methods. We demonstrate the effectiveness of WayFAST through extensive field testing in varied environments, ranging from sandy dry beaches to forest canopies and snow-covered grass fields. Our results clearly demonstrate that WayFAST can learn to avoid geometric obstacles as well as untraversable terrain, such as snow, which would be difficult to avoid with sensors that provide only geometric data, such as LiDAR. Furthermore, we show that our training pipeline based on online traction estimates is more data-efficient than other heuristic-based methods.
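The self-supervision signal described above can be pictured with a simple kinodynamic slip model: if a rolling robot achieves less velocity than it was commanded, its traction on that patch of terrain is low. The sketch below generates per-timestep traction labels in that spirit; it is only a rough stand-in for the paper's online receding-horizon estimator, and all names, thresholds, and the masking scheme are assumptions.

```python
import numpy as np

def traction_labels(v_cmd, v_meas, w_cmd, w_meas, eps=0.05):
    """Per-timestep traction labels from commanded vs. measured velocities.

    Assumes a unicycle-style kinodynamic model in which the executed linear and
    angular velocities are the commanded ones scaled by traction coefficients
    mu_v, mu_w in [0, 1] (low traction -> the robot slips and under-achieves its
    command). Timesteps without a meaningful command are labeled NaN so they can
    be masked out of the training loss.
    """
    v_cmd, v_meas = np.asarray(v_cmd, float), np.asarray(v_meas, float)
    w_cmd, w_meas = np.asarray(w_cmd, float), np.asarray(w_meas, float)

    mu_v = np.full_like(v_cmd, np.nan)
    valid_v = np.abs(v_cmd) > eps
    mu_v[valid_v] = np.clip(v_meas[valid_v] / v_cmd[valid_v], 0.0, 1.0)

    mu_w = np.full_like(w_cmd, np.nan)
    valid_w = np.abs(w_cmd) > eps
    mu_w[valid_w] = np.clip(w_meas[valid_w] / w_cmd[valid_w], 0.0, 1.0)
    return mu_v, mu_w
```

Labels produced this way can be paired with the corresponding RGB-D frames to supervise a traversability prediction network without hand-designed terrain heuristics.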
This paper describes a system for visually guided autonomous navigation of under-canopy farm robots. Low-cost under-canopy robots can drive between crop rows under the plant canopy and accomplish tasks that are infeasible for over-the-canopy drones or larger agricultural equipment. However, autonomously navigating them under the canopy presents a number of challenges: unreliable GPS and LiDAR, the high cost of sensing, challenging farm terrain, clutter due to leaves and weeds, and large variability in appearance over the season and across crop types. We address these challenges by building a modular system that leverages machine learning for robust and generalizable perception from monocular RGB images captured by low-cost cameras, and model predictive control for accurate control on challenging terrain. Our system, CropFollow, is able to autonomously drive 485 meters per intervention on average, outperforming a state-of-the-art LiDAR-based system (286 meters per intervention) in extensive field testing spanning over 25 km.

I. INTRODUCTION

This paper describes the design of a visually guided navigation system for compact, low-cost, under-canopy agricultural robots for commodity row crops (corn, soybean, sugarcane, etc.), such as the one shown in Figure 1. Our system, called CropFollow, uses monocular RGB images from an on-board front-facing camera to steer the robot so that it autonomously traverses between crop rows in harsh, visually cluttered, uneven, and variable real-world agricultural fields. Robust and reliable autonomous navigation of such under-canopy robots has the potential to enable a number of practical and scientific applications: high-throughput plant phenotyping [43,37,68,66,58,25], ultra-precise pesticide treatments, mechanical weeding [41], plant manipulation [17,61], and cover crop planting [64,62]. Such applications are not possible with larger over-canopy tractors and UAVs, and are crucial for increasing agricultural sustainability [55,22].

Autonomous row-following is a foundational capability for robots that need to navigate between crop rows in agricultural fields. Such robots cannot rely on RTK (Real-Time Kinematic)-GPS [21] based methods, which are used for over-the-canopy autonomy (e.g., for drones, tractors, and combine harvesters).

Project website with data and videos: https://ansivakumar.github.io/learned-visual-navigation/.
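To make the modular perception-plus-control split concrete, the sketch below shows one way a perception module's per-image heading and lateral-offset predictions could drive a short-horizon, sampling-based receding-horizon controller over a unicycle model. This is not CropFollow's actual controller; the cost weights, candidate set, and function name are hypothetical.

```python
import numpy as np

def mpc_steer(heading_err, lateral_err, v=0.5, dt=0.1, horizon=10,
              w_candidates=np.linspace(-0.6, 0.6, 25),
              q_lat=1.0, q_head=0.5, r_w=0.05):
    """Pick an angular-velocity command that drives lateral and heading error to zero.

    Rolls out a unicycle model over a short horizon for a discrete set of constant
    angular velocities and returns the candidate with the lowest quadratic cost.
    heading_err (rad) and lateral_err (m) play the role of the per-image predictions
    from the learned perception module.
    """
    best_w, best_cost = 0.0, np.inf
    for w in w_candidates:
        y, th, cost = lateral_err, heading_err, 0.0
        for _ in range(horizon):
            y += v * np.sin(th) * dt        # lateral error propagated by unicycle model
            th += w * dt
            cost += q_lat * y**2 + q_head * th**2 + r_w * w**2
        if cost < best_cost:
            best_w, best_cost = w, cost
    return best_w

# Example: robot 0.1 m right of the row center, heading 5 degrees off
w_cmd = mpc_steer(heading_err=np.deg2rad(5.0), lateral_err=0.1)
```

Keeping perception and control in separate modules like this is what lets the learned front end be retrained or swapped without touching the controller.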
Small robotic vehicles have been navigating agricultural fields in pursuit of new ways to increase agricultural production and meet growing food and energy demands. However, a perception system with reliable awareness of the surroundings remains a challenge for autonomous navigation. Cameras and single-layer laser scanners have been the primary sources of information, yet the former suffer from sensitivity to outdoor lighting and both suffer from occlusion by leaves. This paper describes a three-dimensional acquisition system for corn crops. The sensing core is a single-layer UTM-30LX laser scanner rotating around its axis, while an inertial sensor provides angular measurements. With this rotation, multiple layers are combined into a 3D point cloud, which is represented by a two-dimensional occupancy grid. Each cell is filled according to the number of readings, and the cell weights derive from two procedures: first, a mask enhances vertical entities (stalks); second, two Gaussian functions centered on the expected positions of the immediate neighboring rows weaken readings in the middle of the lane and in farther rows. The resulting occupancy grid allows the corn rows to be represented as virtual walls, which serve as references for a wall-follower algorithm. According to the experimental results, the virtual walls are segmented with less influence from straying leaves and sparse weeds than segmentation based on single-layer laser scanner data: 64.02% of the 3D outputs are within a 0.05 m error limit of the expected lane width, while only 11.63% of the single-layer laser data are within the same limit.
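The grid-weighting procedure described above can be sketched as follows: hit counts per cell are boosted by the vertical extent of the points falling in that cell (a stand-in for the vertical-entity mask) and then multiplied by two Gaussians centered on the expected neighboring-row positions. This is an interpretation of the abstract, not the authors' code; the grid geometry, parameter values, and names are all assumptions.

```python
import numpy as np

def weighted_occupancy_grid(points, cell=0.05, half_width=2.0, length=4.0,
                            row_spacing=0.75, sigma=0.15):
    """Project a 3D point cloud onto a 2D occupancy grid weighted toward stalks.

    points: (N, 3) array (x forward, y lateral, z up) in the robot frame.
    Cells are filled with hit counts, boosted by the vertical extent of the points
    they contain, and multiplied by two Gaussians on the expected row positions.
    """
    nx, ny = int(length / cell), int(2 * half_width / cell)
    ix = (points[:, 0] / cell).astype(int)
    iy = ((points[:, 1] + half_width) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    ix, iy, z = ix[ok], iy[ok], points[ok, 2]

    # Hit counts per cell.
    grid = np.zeros((nx, ny))
    np.add.at(grid, (ix, iy), 1.0)

    # Vertical-extent boost: cells spanned by points at several heights are more
    # likely to be stalks than low-hanging leaves or weeds.
    zmax = np.full((nx, ny), -np.inf)
    zmin = np.full((nx, ny), np.inf)
    np.maximum.at(zmax, (ix, iy), z)
    np.minimum.at(zmin, (ix, iy), z)
    vert = np.where(grid > 0, zmax - zmin, 0.0)
    grid *= 1.0 + vert

    # Two Gaussians on the expected lateral positions of the immediate rows
    # weaken readings in the middle of the lane and in farther rows.
    y = np.linspace(-half_width, half_width, ny)
    row_weight = (np.exp(-0.5 * ((y - row_spacing / 2) / sigma) ** 2)
                  + np.exp(-0.5 * ((y + row_spacing / 2) / sigma) ** 2))
    return grid * row_weight[None, :]
```

The two ridges that survive this weighting act as the "virtual walls" that a wall-follower controller can track.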