2006
DOI: 10.1109/mra.2006.1678139
Vision-based multi-UAV position estimation

Cited by 62 publications (36 citation statements)
References 22 publications
“…However, recently, both WSNs and VSNs have begun to be studied together [3], [4]. Several survey papers deal with the overall architecture, characteristics and research directions of WVSNs with respect to topology control, event-driven operation, image processing, communication and networking and energy management [5]- [10].…”
Section: A. Related Work (citation type: mentioning, confidence: 99%)
“…The main drawback of this method is the continuous accumulation of displacement errors, for which no measure of uncertainty is provided. For a group of UAVs [12], a homography-based method is presented in which observations of a common scene enable the robots to estimate their relative poses and localize with respect to a common frame of reference. Unfortunately, the planar scene assumption is unsuitable for many real-world scenarios (e.g., when flying near the ground or indoors).…”
Section: B. Visual Odometry and Vision-Aided Inertial Navigation (citation type: mentioning, confidence: 99%)
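The homography-based relative-pose idea in the excerpt above can be sketched numerically: two cameras viewing the same plane n·X = d are related by the planar homography H = K(R + t nᵀ/d)K⁻¹, which maps pixels of the plane from one view to the other. The sketch below (made-up intrinsics, poses, and plane parameters — a minimal NumPy illustration, not the cited method's implementation) builds H from a known relative pose and checks that it maps a plane point consistently between the two views:

```python
import numpy as np

# Hypothetical intrinsics shared by both UAV cameras (assumption).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])

# Assumed relative pose of camera 2 w.r.t. camera 1 (yaw + small translation).
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.2, -0.1, 0.05])

# Scene plane n.X = d in the camera-1 frame (e.g., the ground below the UAVs).
n = np.array([0.0, 0.0, 1.0])
d = 5.0

# Planar homography induced by the plane: H = K (R + t n^T / d) K^-1.
H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

# Check: a 3-D point lying on the plane projects consistently through H.
X = np.array([1.0, -0.5, d])            # satisfies n.X = d
x1 = K @ X;           x1 /= x1[2]       # pixel in view 1
x2 = K @ (R @ X + t); x2 /= x2[2]       # pixel in view 2
x1_mapped = H @ x1;   x1_mapped /= x1_mapped[2]
print(np.allclose(x1_mapped, x2))       # → True
```

In practice the direction is reversed: H is estimated from matched features (e.g., RANSAC over point correspondences) and then decomposed to recover R, t/d, and n — which is why the planar-scene assumption the excerpt criticizes is essential to the method.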
“…This builds a local map of the environment and transmits it to other robots in order to combine them and generate a more complete and accurate model for the whole team [3,17].…”
Section: Ral (citation type: mentioning, confidence: 99%)
“…(v) HERO2 detected the fire [20] and geolocalized it [3,17] computing the GPS coordinates of the fire. Then, HERO2 generated a new task (Extinguish(E1)) and inserted it in the multi-robot negotiation process.…”
Section: Description of the Demonstration in the 'Alamillo' Park (citation type: mentioning, confidence: 99%)