2019
DOI: 10.3390/s19214794
Vision-Based Multirotor Following Using Synthetic Learning Techniques

Abstract: Deep- and reinforcement-learning techniques increasingly require large sets of real data to achieve stable convergence and generalization in image-recognition, object-detection, and motion-control strategies. On this subject, the research community lacks robust approaches to overcoming the unavailability of extensive real-world data by means of realistic synthetic information and domain-adaptation techniques. In this work, synthetic-learning strategies have been used for the vision-based autonomous fol…
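The paper's actual synthetic-learning pipeline is not reproduced on this page. As a purely illustrative sketch of the kind of domain-randomization step commonly used to narrow the synthetic-to-real gap (all parameter ranges below are hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_image(img):
    """Apply simple domain randomization to a synthetic frame:
    random brightness shift, contrast scaling, and additive
    Gaussian noise (illustrative values only)."""
    brightness = rng.uniform(-30.0, 30.0)      # hypothetical range
    contrast = rng.uniform(0.7, 1.3)           # hypothetical range
    noise = rng.normal(0.0, 5.0, img.shape)    # per-pixel noise
    out = (img.astype(np.float64) - 128.0) * contrast + 128.0
    out = out + brightness + noise
    return np.clip(out, 0, 255).astype(np.uint8)

# A flat gray 64x64 RGB frame stands in for a rendered synthetic image.
synthetic = np.full((64, 64, 3), 128, dtype=np.uint8)
augmented = randomize_image(synthetic)
```

In practice, such randomized synthetic frames would feed the detector or policy training loop so the learned model does not overfit to the renderer's appearance.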

Cited by 6 publications (3 citation statements) · References 53 publications
“…Remark 1: Assumption 1 is reasonable since most common practical tracking objects satisfy such an assumption, e.g., rigid-body, cars, standing pedestrian. Note that Assumption 2 is only needed in the initialization stage instead of being required in the whole tracking process as considered in [9] and [28]. Therefore, Assumption 2 can be more easily guaranteed and more applicable to the practical tracking assignment (e.g., slope ground, object moving up and moving down during tracking).…”
Section: A Relative Position Estimation
confidence: 99%
“…Moreover, drones with high maneuverability in small spaces fits both urban missions [ 14 , 15 , 16 , 17 , 18 ], such as monitoring road traffic, and rescue missions [ 19 , 20 , 21 , 22 , 23 , 24 , 25 ], such as patrolling dangerous zone after natural disasters. In many of these applications Vision-Based Navigation (VBN) algorithms are often used to improve accuracy in the positioning [ 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 ]. Film makers and video studios are successfully using UAVs for aerial shooting [ 35 , 36 ].…”
Section: Introduction
confidence: 99%
“…The specific contribution is to provide a detailed set of benchmarks of the performance of these methods in a wide variety of flight patterns and under different test conditions, quantifying the ranging performance using an optical motion-capture system installed in our flight arena and making a recommendation about the choice of algorithm based on these results. While other studies have been published regarding testing and benchmarking of vision-based UAV detection and ranging, for instance [9][10][11], our study is unique in combining a broad choice of object detection algorithms (five candidates), having access to exact ground truth provided by an indoor motion capture system, and employing the commercial Parrot AR.Drone 2.0 UAV which brings about the challenges of its difficult-to-spot frontal profile due to its protective styrofoam hull and the low-resolution video from its onboard camera. We have chosen to focus exclusively on the case of the camera being carried onboard the pursuer UAV, as opposed to one or more camera on the ground, since this aligns with our lab's focus on vision-based UAV-to-UAV pursuit as a multi-faceted research program blending techniques from computer vision, state estimation, and control systems.…”
Section: Introduction
confidence: 99%