2014
DOI: 10.1016/j.robot.2014.03.012

Scale-aware navigation of a low-cost quadrocopter with a monocular camera

Abstract: We present a complete solution for the visual navigation of a small-scale, low-cost quadrocopter in unknown environments. Our approach relies solely on a monocular camera as the main sensor, and therefore does not need external tracking aids such as GPS or visual markers. Costly computations are carried out on an external laptop that communicates over wireless LAN with the quadrocopter. Our approach consists of three components: a monocular SLAM system, an extended Kalman filter for data fusion, and a PID controller…

Cited by 185 publications (126 citation statements)
References 23 publications
“…Generated point cloud by PTAM. However, the motion of the camera is not easily predictable because of motion artifacts, robot dynamics uncertainty, friction, gearing backlash, etc. This nonlinear behavior makes the point cloud coordinates precise but not accurate, since the 3D point coordinates are scaled by an a priori unknown factor λ [11]. In particular, we can model the PTAM measurements as Gaussian random variables with standard deviation σ_PTAM and mean λμ_i, where μ_i is the true position of the feature in 3D space, and λ is an unknown scale factor.…”
Section: Computer Vision (mentioning)
confidence: 99%
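This measurement model can be illustrated numerically. The values of λ and σ_PTAM below are hypothetical, chosen only to show why the map is "precise but not accurate": repeated measurements of a feature cluster tightly (small spread) around a position that is off by the unknown scale factor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical values, for illustration only.
lam = 1.7                           # unknown scale factor lambda
sigma_ptam = 0.02                   # PTAM measurement noise (sigma_PTAM)
mu_i = np.array([0.5, 0.2, 1.5])    # true 3D position of one feature (metres)

# PTAM reports the feature at lam * mu_i plus Gaussian noise.
samples = lam * mu_i + rng.normal(0.0, sigma_ptam, size=(1000, 3))

spread = samples.std(axis=0)          # small: the map is internally consistent
offset = samples.mean(axis=0) - mu_i  # large: biased by the unknown scale
print(spread, offset)
```

The spread stays at the level of σ_PTAM, while the systematic offset is (λ − 1)·μ_i, which no amount of averaging removes.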
“…It is possible to combine the output of the two methods by means of a maximum-likelihood estimation method [11] in order to obtain an estimate of λ and correct the PTAM points' coordinates. This is equivalent to minimizing the negative log-likelihood function for a given number n of acquisitions: …”
Section: Computer Vision (mentioning)
confidence: 99%
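A simplified variant of this estimation can be sketched as follows. Assume metric readings d_i come from a second sensor (e.g. an ultrasound altimeter) and only the PTAM reading x_i ~ N(λ·d_i, σ²) is noisy; then minimizing the negative log-likelihood Σ_i (x_i − λ·d_i)² / (2σ²) in λ has a closed-form least-squares solution. All values below are illustrative, and the actual estimator in [11] is more general (it models noise in both sensors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: true scale and metric distances from a second sensor.
true_lambda = 2.5
sigma = 0.05
d = rng.uniform(0.5, 3.0, size=200)                    # metric readings
x = true_lambda * d + rng.normal(0.0, sigma, size=200) # scaled PTAM readings

# d/dlam sum_i (x_i - lam*d_i)^2 = 0  =>  lam_hat = (d . x) / (d . d)
lam_hat = np.dot(d, x) / np.dot(d, d)

corrected = x / lam_hat   # rescale PTAM coordinates to metric units
print(lam_hat)
```

With a few hundred acquisitions the estimate converges close to the true scale, after which the point cloud can be rescaled once.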
“…This is most noted in their use of the AR.Drone as a platform for control algorithms and vision navigation research, as well as a tool for educational and outreach purposes. Their research mainly pertains to the use of monocular RGB-D cameras to implement SLAM for the purpose of scale-aware autonomous navigation and dense visual odometry [9]. In their research, the vision group at TUM has employed a PID controller successfully on the Parrot Drone [9].…”
Section: Introduction (mentioning)
confidence: 99%
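A minimal sketch of such a PID position loop is below. The gains and the toy plant dynamics are invented for illustration and are not taken from [9]:

```python
class PID:
    """Minimal PID controller of the kind used for waypoint holding."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint: float, measured: float, dt: float) -> float:
        error = setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy usage: drive a heavily damped plant toward a 1 m setpoint (one axis).
pid = PID(kp=2.0, ki=0.1, kd=0.3)
pos, vel, dt = 0.0, 0.0, 0.02
for _ in range(1000):          # 20 s at 50 Hz
    u = pid.update(1.0, pos, dt)
    vel += u * dt              # crude acceleration from the control input
    vel *= 0.9                 # strong damping, standing in for drag
    pos += vel * dt
print(pos)
```

In the real system one such loop would run per controlled degree of freedom, fed by the scale-corrected pose estimate rather than by a simulated plant.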
“…Control techniques similar to those demonstrated by TUM have been shown in Altuğ & Taylor [10], as well as Bristeau et al. [11].…”
Section: Introduction (mentioning)
confidence: 99%