2016
DOI: 10.3390/s16111844

Towards Autonomous Modular UAV Missions: The Detection, Geo-Location and Landing Paradigm

Abstract: Nowadays, various unmanned aerial vehicle (UAV) applications are becoming increasingly demanding, since they require real-time, autonomous and intelligent functions. Towards this end, in the present study, a fully autonomous UAV scenario is implemented, including the tasks of area scanning, target recognition, geo-location, monitoring, following and finally landing on a high-speed moving platform. The underlying methodology includes AprilTag target identification through Graphics Processing Unit (GPU) parallelized pr…

Cited by 40 publications (29 citation statements)
References 33 publications
“…We implemented our algorithm on an onboard system that has a 32-bit 800-MHz ARM Cortex-A9 central processing unit (CPU) [64], 512 MB RAM, 1.5 GB flash memory, and a Linux kernel (version 3.12.10). Previous studies [33, 65, 66] used sophisticated tracking algorithms on high-end embedded systems such as an NVIDIA Jetson TK1 developer kit, which includes an ARM Cortex-A15 CPU (above 1 GHz [64]) and a GPU [37], or an Intel NUC board with a 3.4 GHz CPU [67]. In particular, [33, 66] can exploit parallel processing on a GPU, but our system does not include a GPU, which makes it difficult to utilize parallel processing.…”
Section: Results (mentioning)
Confidence: 99%
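The contrast drawn in this statement, a GPU-parallelized pipeline versus a GPU-less 800 MHz Cortex-A9 board, is essentially a compute-budget argument. As a rough illustration of the kind of lightweight, CPU-only tracking such a board can sustain, here is a minimal sketch based on plain OpenCV template matching; the video source, initial target box and acceptance threshold are illustrative assumptions, and this is not the cited authors' tracker.

```python
# Minimal CPU-only target tracking via OpenCV template matching (illustrative).
# The video path, initial box and threshold below are hypothetical values.
import cv2

cap = cv2.VideoCapture("landing_pad.mp4")      # hypothetical onboard video clip
ok, frame = cap.read()
assert ok, "could not read the first frame"

x, y, w, h = 300, 200, 64, 64                  # hypothetical initial target box
template = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Search only a window around the last position to keep the per-frame
    # cost low on a GPU-less embedded CPU.
    sx, sy = max(0, x - w), max(0, y - h)
    search = gray[sy:sy + 3 * h, sx:sx + 3 * w]

    scores = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, (bx, by) = cv2.minMaxLoc(scores)
    if best > 0.6:                             # hypothetical acceptance threshold
        x, y = sx + bx, sy + by
    print(f"target at ({x}, {y}), score {best:.2f}")
```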
“…Recently, many studies [30, 31] have used AprilTag [32] as a landing target owing to its high contrast, and its two-dimensional (2D) tags are designed to be robust to low image resolution, occlusion, rotation and lighting variation. Kyristsis et al. [33] used the AprilTags C++ Library [34] along with the OpenCV4Tegra framework [35], which allows all OpenCV functions to run in parallel as graphics processing unit (GPU) functions, and achieved a detection rate of 26–31 fps with the help of the global navigation satellite system (GNSS). The hardware that they used was quite powerful: a DJI Matrice UAV [36] along with an NVIDIA Tegra K1 SoC embedded processor [37].…”
Section: Related Work (mentioning)
Confidence: 99%
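The statement above refers to the cited paper's GPU-accelerated AprilTag pipeline (AprilTags C++ Library with OpenCV4Tegra). Purely for orientation, the snippet below sketches the same detection step on the CPU using the third-party pupil_apriltags Python binding; the choice of binding, the camera intrinsics and the 16 cm tag size are assumptions, and this does not reproduce the cited GPU implementation or its 26–31 fps figure.

```python
# Minimal AprilTag (tag36h11) detection and pose estimate on one grayscale frame.
# Uses the pupil_apriltags binding; intrinsics and tag size below are assumptions.
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")

frame = cv2.imread("frame_000123.png")                # hypothetical aerial frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

fx, fy, cx, cy = 920.0, 920.0, 640.0, 360.0           # assumed pinhole intrinsics (px)
detections = detector.detect(
    gray,
    estimate_tag_pose=True,
    camera_params=(fx, fy, cx, cy),
    tag_size=0.16,                                    # assumed tag edge length (m)
)

for det in detections:
    u, v = det.center                                 # tag centre in pixels
    t = det.pose_t.ravel()                            # tag position in the camera frame (m)
    print(f"tag {det.tag_id}: pixel ({u:.1f}, {v:.1f}), "
          f"camera-frame position ({t[0]:.2f}, {t[1]:.2f}, {t[2]:.2f}) m")
```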
“…In terms of the vision-based landing scheme, several patterns have been designed as markers to tackle the close-range and nighttime detection problem during UAV descent [15-18]. Moreover, for landing on a moving target, schemes that either optimize the marker detection rate [19] or exploit the moving target's dynamic model [20] were developed accordingly. However, the performance of the aforementioned schemes relies mainly on the specific target pattern and is unlikely to apply in an unvisited environment where there is no chance to set up a well-defined landing guide in advance.…”
Section: Introduction (mentioning)
Confidence: 99%
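The second family of schemes mentioned in this statement exploits the moving target's dynamic model. A common textbook instance of that idea (not taken from the cited works) is a constant-velocity Kalman filter that predicts where the platform will be during descent; the sketch below assumes a 10 Hz position measurement and illustrative noise levels.

```python
# Constant-velocity Kalman filter for a moving landing platform (2D, illustrative).
# State x = [px, py, vx, vy]; measurements are noisy (px, py) position fixes.
import numpy as np

dt = 0.1                                    # assumed 10 Hz measurement rate
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)  # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = 0.05 * np.eye(4)                        # assumed process noise
R = 0.5 * np.eye(2)                         # assumed measurement noise

x = np.zeros(4)                             # initial state estimate
P = 10.0 * np.eye(4)                        # initial covariance

def kf_step(x, P, z):
    """One predict/update cycle given a position measurement z = [px, py]."""
    # Predict with the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement.
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Feed simulated platform fixes moving at ~1.5 m/s along x.
rng = np.random.default_rng(0)
for k in range(50):
    z = np.array([1.5 * k * dt, 0.0]) + rng.normal(0, 0.5, size=2)
    x, P = kf_step(x, P, z)

# Predict 2 s ahead to choose a touchdown point on the moving platform.
lookahead = np.linalg.matrix_power(F, 20) @ x
print("predicted platform position in 2 s:", lookahead[:2])
```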
“…The image depth may be used to predict the distance to an obstacle [16] and, in some cases, can be shared between several unmanned vehicles [17]. Using these camera-based methods, even auto-landing can be implemented [18]. However, in any case, the camera placement position and all viewing angles must be calculated [19].…”
Section: Introduction (mentioning)
Confidence: 99%
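This statement bundles two small calculations: reading an obstacle distance out of a depth image and accounting for the camera's placement and viewing angles. The sketch below illustrates both in their simplest form, a median over a depth-image window followed by a pinhole back-projection rotated by an assumed camera mounting pitch; the intrinsics, file name, pixel of interest and tilt angle are all illustrative assumptions rather than values from the cited papers.

```python
# Estimate an obstacle's distance from a depth image and express it in the UAV
# body frame, accounting for the camera mounting angle (all numbers assumed).
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5   # assumed depth-camera intrinsics (px)

depth = np.load("depth_000042.npy")           # hypothetical depth map in metres (H x W)

# Obstacle distance: median depth over a small window around pixel (u, v).
u, v, half = 320, 240, 8
roi = depth[v - half:v + half, u - half:u + half]
d = float(np.median(roi[roi > 0]))            # ignore invalid (zero) returns
print(f"depth to obstacle along the optical axis: {d:.2f} m")

# Back-project the pixel to a 3D point in the camera frame (pinhole model,
# camera axes: x right, y down, z along the optical axis).
p_cam = np.array([(u - cx) * d / fx, (v - cy) * d / fy, d])

# Rotation from the camera frame to a forward-right-down body frame for a
# camera tilted `pitch` radians below the nose; this is where the camera
# placement and angles mentioned in [19] enter the calculation.
pitch = np.deg2rad(20.0)                      # assumed downward camera tilt
cp, sp = np.cos(pitch), np.sin(pitch)
R_body_cam = np.array([[0.0, -sp,  cp],
                       [1.0, 0.0, 0.0],
                       [0.0,  cp,  sp]])
p_body = R_body_cam @ p_cam
print("obstacle position in the body frame (m, x forward, y right, z down):",
      np.round(p_body, 2))
```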