2015 International Conference on Unmanned Aircraft Systems (ICUAS) 2015
DOI: 10.1109/icuas.2015.7152419
Monocular vision-based autonomous navigation system on a toy quadcopter in unknown environments

Abstract: In this paper, we present a monocular vision-based autonomous navigation system for a commercial quadcopter. The quadcopter communicates with a ground-based laptop via a wireless connection. The video stream of the drone's front camera and the navigation data measured onboard are sent to the ground station and then processed by a vision-based SLAM system. In order to handle motion blur and frame loss in the received video, our SLAM system consists of an improved robust feature tracking scheme and a relocalis…

Cited by 13 publications (4 citation statements)
References 17 publications
“…This approach uses feature extraction from images to perform navigation and depth extraction. Huang et al. [34] proposed an accurate autonomous indoor navigation system for a quadcopter with monocular vision. They used information from multiple sensors on the drone as well as a vision-based Simultaneous Localization and Mapping (SLAM) system and an Extended Kalman Filter (EKF) to achieve robust and accurate 3D position and velocity estimation.…”
Section: Related Work
confidence: 99%
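The fusion scheme described above can be illustrated with a minimal sketch. This is not the paper's implementation: the constant-velocity model, noise covariances, and the assumption that the SLAM front end delivers a 3D position fix are all assumptions made here for the example; a linear Kalman update stands in for the full EKF since the measurement model is linear in this toy state.

```python
import numpy as np

# State x = [px, py, pz, vx, vy, vz]; SLAM observes position only.
def make_filter(dt):
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                     # position integrates velocity
    Q = 0.01 * np.eye(6)                           # process noise (assumed)
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # measurement: position
    R = 0.05 * np.eye(3)                           # measurement noise (assumed)
    return F, Q, H, R

def filter_step(x, P, z, F, Q, H, R):
    # Predict with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the vision-based position measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = np.zeros(6), np.eye(6)
F, Q, H, R = make_filter(dt=0.05)
x, P = filter_step(x, P, np.array([1.0, 0.0, 0.5]), F, Q, H, R)
```

After one update the state is pulled toward the measured position while the cross-covariance terms also correct the velocity estimate, which is how such a fusion yields both position and velocity from a position-only sensor.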
“…Second, a single camera provides only 2D information; thus, scale estimation is required to estimate the 3D pose [8]. Hence, a current research challenge is the development of advanced robust nonlinear filtering approaches for multi-sensor fusion. This approach is based on a nonlinear H∞ filter and fuzzy adaptive parameter tuning for resolving autonomous unmanned vehicle localization based on the fusion of inertial measurements, GPS, and monocular-vision data.…”
Section: Introduction
confidence: 99%
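The scale ambiguity mentioned above can be resolved by aligning the up-to-scale visual trajectory with a metric sensor. The following is a hypothetical sketch, not any cited system's method: it assumes altitude readings from a metric sensor (e.g. an ultrasonic altimeter) and fits a single scale factor s by least squares over altitude increments.

```python
import numpy as np

def estimate_scale(vision_alt, metric_alt):
    """Least-squares scale s minimizing sum (s * d_vis - d_metric)^2,
    computed over frame-to-frame altitude increments."""
    d_vis = np.diff(np.asarray(vision_alt, dtype=float))
    d_met = np.diff(np.asarray(metric_alt, dtype=float))
    denom = np.dot(d_vis, d_vis)
    if denom < 1e-12:
        raise ValueError("no vertical motion to estimate scale from")
    return float(np.dot(d_met, d_vis) / denom)

# Example: visual units are half of metres, so the fitted scale is 2.0
s = estimate_scale([0.0, 0.1, 0.25, 0.4], [0.0, 0.2, 0.5, 0.8])
```

Working on increments rather than absolute altitudes removes any constant offset between the two sensors, so only the scale remains to be fitted.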
“…In this paper we explore single-camera-based autonomous navigation and obstacle avoidance for MAVs in real environments. Traditional systems employ handcrafted sensing and control algorithms to enable navigation, and these have led to significant progress in this field [5], [6]. Recently, the success of deep neural networks has enticed researchers to study neuromorphic models of autonomous navigation [7]-[9].…”
Section: Introduction
confidence: 99%