GPS-denied closed-loop autonomous control of unstable Unmanned Aerial Vehicles (UAVs) such as rotorcraft using information from a monocular camera has been an open problem. Most proposed Vision-aided Inertial Navigation Systems (V-INSs) have been too computationally intensive or lack sufficient integrity for closed-loop flight. We provide an affirmative answer to the question of whether V-INSs can be used to sustain prolonged real-world GPS-denied flight by presenting a V-INS that is validated through autonomous flight tests over prolonged closed-loop dynamic operation in both indoor and outdoor GPS-denied environments with two rotorcraft unmanned aircraft systems (UASs). The architecture efficiently combines visual feature information from a monocular camera with measurements from inertial sensors. Inertial measurements are used to predict the frame-to-frame transition of online-selected feature locations, and the difference between predicted and observed feature locations is used to bound, in real time, the inertial measurement unit drift, estimate its bias, and account for initial misalignment errors. A novel algorithm to manage a library of features online is presented that can add or remove features based on a measure of relative confidence in each feature location. The resulting V-INS is sufficiently efficient and reliable to enable real-time implementation on resource-constrained aerial vehicles. The presented algorithms are validated on multiple platforms in real-world conditions: through a 16-min flight test, including an autonomous landing, of a 66 kg rotorcraft UAV operating in an uncontrolled outdoor environment without using GPS, and through a Micro-UAV operating in a cluttered, unmapped, and gusty indoor environment.
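The core loop the abstract describes is an EKF cycle in which inertial measurements propagate the state and the pixel innovation (observed minus predicted feature location) corrects drift. The following is a minimal sketch of that idea, not the authors' implementation: the state, camera model, landmark position, and all noise values are illustrative assumptions, and attitude/bias states are omitted for brevity.

```python
# Sketch: IMU-predicted feature locations correct drift via an EKF update.
# All parameters (focal length, landmark, noise levels) are assumptions.
import numpy as np

f_px = 400.0                            # assumed focal length in pixels

def project(p_cam):
    """Pinhole projection of a camera-frame point (z forward)."""
    return f_px * np.array([p_cam[0] / p_cam[2], p_cam[1] / p_cam[2]])

# State: position (3) and velocity (3); attitude and bias states omitted.
x = np.zeros(6)
P = np.eye(6) * 0.01
Q = np.eye(6) * 1e-4                    # process noise (assumed)
R = np.eye(2) * 1.0                     # pixel measurement noise (assumed)
landmark = np.array([2.0, 1.0, 5.0])    # tracked feature location (assumed)
dt = 0.01

def propagate(x, P, accel):
    """IMU prediction step: dead-reckon position and velocity."""
    F = np.eye(6)
    F[0:3, 3:6] = np.eye(3) * dt
    x = F @ x
    x[3:6] += accel * dt
    return x, F @ P @ F.T + Q

def update(x, P, z_obs):
    """Vision correction: innovation between observed and predicted pixels."""
    p_cam = landmark - x[0:3]           # level, non-rotating camera assumed
    z_pred = project(p_cam)
    H = np.zeros((2, 6))                # numerical Jacobian w.r.t. position
    for i in range(3):
        dx = np.zeros(6)
        dx[i] = 1e-6
        H[:, i] = (project(landmark - (x + dx)[0:3]) - z_pred) / 1e-6
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z_obs - z_pred)
    return x, (np.eye(6) - K @ H) @ P

# One cycle: IMU prediction followed by a vision correction.
x, P = propagate(x, P, accel=np.array([0.0, 0.0, 0.1]))
x, P = update(x, P, z_obs=np.array([160.0, 80.0]))
```

The feature-library management the abstract mentions would sit on top of this loop, e.g. dropping a feature whose innovations repeatedly fail a confidence gate and initializing a replacement.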
An unmanned aerial vehicle requires adequate knowledge of its surroundings in order to operate in close proximity to obstacles. UAVs also have strict payload and power constraints, which limit the number and variety of sensors available to gather this information. It is desirable, therefore, to enable a UAV to gather information about potential obstacles or interesting landmarks using common and lightweight sensor systems. This paper presents a method of fast terrain mapping with a monocular camera. Features are extracted from camera images and used to update a sequential extended Kalman filter. The feature locations are parameterized in inverse depth to enable fast depth convergence. Converged features are added to a persistent terrain map, which can be used for obstacle avoidance and additional vehicle guidance. Simulation results and results from recorded flight test data are presented to validate the algorithm.
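Inverse-depth parameterization is the key to the fast convergence claimed here: a feature is stored as the camera position at first observation, a bearing, and an inverse depth, which keeps the bearing-only measurement model near-linear even for distant points. A brief sketch of the standard parameterization follows; the variable names and the prior value are illustrative, not the paper's exact notation.

```python
# Sketch of the inverse-depth feature parameterization used in monocular
# EKF mapping. Names and values are illustrative assumptions.
import numpy as np

def inverse_depth_to_point(anchor, theta, phi, rho):
    """Recover the Euclidean feature location from inverse-depth parameters.

    anchor : 3-vector, camera position at first observation
    theta  : azimuth of the first-observation ray
    phi    : elevation of the first-observation ray
    rho    : inverse depth (1/m); small rho means a distant feature
    """
    m = np.array([np.cos(phi) * np.sin(theta),   # unit ray direction
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return anchor + m / rho

# A new feature starts with a conservative inverse-depth prior; rho -> 0
# represents near-infinite depth with well-behaved (near-Gaussian) error,
# which is what lets depth converge quickly from bearing-only updates.
p = inverse_depth_to_point(np.zeros(3), theta=0.1, phi=0.05, rho=0.2)
print(p)   # feature roughly 5 m along the initial viewing ray
```

Once the inverse-depth uncertainty has collapsed, the feature can be converted to this Euclidean form and moved into the persistent terrain map, freeing filter states for new features.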
This paper describes the target detection and tracking architecture used by the Georgia Tech Aerial Robotics team for the American Helicopter Society (AHS) Micro Aerial Vehicle (MAV) challenge. The vision system described enables vision-aided navigation with additional abilities, such as target detection and tracking, all performed on the vehicle's onboard computer. The authors suggest a robust target tracking method that does not depend solely on the image obtained from a camera, but also utilizes the other sensor outputs and runs a target location estimator. The machine-learning-based target identification method uses Haar-like classifiers to extract the target candidate points. The raw measurements are fed into multiple Extended Kalman Filters (EKFs). A statistical test (Z-test) is used to gate the measurements and resolve the measurement correspondence problem. Using multiple EKFs allows us not only to optimally estimate the target location, but also to use that information as one of the criteria for evaluating tracking performance. The MAV utilizes performance-based criteria that determine whether or not to initiate a maneuver such as hovering over or landing on the target. The performance criteria are closed in the loop, which allows the system to determine at any time whether or not to continue with the maneuver. For the Vision-aided Inertial Navigation System (V-INS), a Harris corner algorithm finds the feature points, which are then tracked using statistical knowledge. The feature point locations are integrated in a Bierman-Thornton extended Kalman Filter (BTEKF) with Inertial Measurement Unit (IMU) and sonar sensor outputs to generate the vehicle states: position, velocity, attitude, and accelerometer and gyroscope biases. A 6-degrees-of-freedom quadrotor flight simulator was developed to test the suggested method. This paper provides simulation results for the vision-based maneuvers: hovering over the target and landing on the target. In addition to the simulation results, flight tests have been conducted to validate the system performance. The 500-gram Georgia Tech Quadrotor (GTQ)-Mini was used for the flight tests. All processing is done onboard the vehicle, and it is able to operate without human interaction. Both the simulation and flight-test results show the effectiveness of the suggested method. This system and vehicle were used for the AHS 2015 MAV Student Challenge, which required a GPS-denied, closed-loop target search. The vehicle successfully found the ground target and landed at the desired location. This paper shares the data obtained from the competition.
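The Z-test gating described above is, in the standard formulation, a chi-square test on the normalized innovation: a raw detection is accepted by a filter only if its Mahalanobis distance under the predicted measurement covariance falls inside a confidence gate. A minimal sketch of that gate follows; the 2-D measurement assumption and the 95% threshold are illustrative choices, not the paper's stated values.

```python
# Sketch of statistical (Z-/chi-square) gating of a candidate measurement
# against an EKF's predicted measurement covariance. Threshold is assumed.
import numpy as np

CHI2_GATE_2D = 5.991   # 95% gate for a 2-D measurement

def gate_measurement(z, z_pred, H, P, R):
    """Return (accept, mahalanobis_sq) for a candidate measurement z."""
    S = H @ P @ H.T + R                        # innovation covariance
    nu = z - z_pred                            # innovation
    d2 = float(nu @ np.linalg.inv(S) @ nu)     # normalized innovation
    return d2 <= CHI2_GATE_2D, d2
```

In a multiple-EKF setup, each detection can be tested against each filter, and the same normalized innovation doubles as a running tracking-quality score, which is plausibly how the performance-based hover/land criteria described above would be computed.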
As unmanned aerial vehicles are used in more environments, flexible navigation strategies are required to ensure safe and reliable operation. Operation in the presence of degraded or denied GPS signal is critical in many environments, particularly indoors, in urban canyons, and in hostile areas. Two techniques, laser-based simultaneous localization and mapping (SLAM) and monocular visual SLAM, in conjunction with inertial navigation, have attracted considerable attention in the research community. This paper presents an integrated navigation system combining both visual SLAM and laser SLAM with an EKF-based inertial navigation system. The monocular visual SLAM system has fully correlated vehicle and feature states. The laser SLAM system is based on Monte Carlo scan-to-map matching and leverages the visual data to reduce ambiguities in the pose matching. The system is validated in a full six-degree-of-freedom simulation and in flight test. A key feature of the work is that the system is validated with a controller in the navigation loop.
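Monte Carlo scan-to-map matching, in its generic form, samples candidate poses around a prediction, projects the laser scan into the map from each candidate, and keeps the pose whose endpoints best agree with occupied map cells. The sketch below illustrates that general idea only; the grid resolution, sampling spread, and scoring rule are assumptions, not the paper's formulation.

```python
# Sketch of generic Monte Carlo scan-to-map matching on a 0.1 m occupancy
# grid stored as a set of rounded (x, y) tuples. Parameters are assumed.
import numpy as np

def score_pose(pose, scan_pts, occupied):
    """Count scan endpoints that land on occupied map cells."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    hits = 0
    for px, py in scan_pts:                    # scan points in body frame
        wx, wy = x + c * px - s * py, y + s * px + c * py
        if (round(wx, 1), round(wy, 1)) in occupied:
            hits += 1
    return hits

def match(pred_pose, scan_pts, occupied, n=200, sigma=(0.1, 0.1, 0.05)):
    """Sample poses around the prediction; return the best-scoring one."""
    rng = np.random.default_rng(0)
    samples = pred_pose + rng.normal(0.0, sigma, size=(n, 3))
    scores = [score_pose(p, scan_pts, occupied) for p in samples]
    return samples[int(np.argmax(scores))]
```

The paper's stated contribution of using visual data to disambiguate the pose match would correspond, in this sketch, to tightening or re-centering the sampling distribution around the visually estimated pose when the scan score alone has multiple strong peaks.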