2021
DOI: 10.3390/electronics10161931
Dynamic Target Tracking and Ingressing of a Small UAV Using Monocular Sensor Based on the Geometric Constraints

Abstract: In many applications of airborne visual techniques for unmanned aerial vehicles (UAVs), lightweight sensors and efficient visual positioning and tracking algorithms are essential in GNSS-denied environments. Meanwhile, many tasks require the ability to recognize, localize, avoid, or fly past dynamic obstacles. In this paper, for a small UAV equipped with a lightweight monocular sensor, a single-frame parallel-features positioning method (SPPM) is proposed and verified for real-time…

Cited by 7 publications (10 citation statements). References 48 publications.
“…Current neural-network approaches to depth estimation are still not mature, suffering from heavy computation and poor real-time performance; examples include CNN-based monocular depth-estimation methods [20], multi-scale networks for depth-map estimation [21], ResNet-based depth estimation [22], and the combination of a CNN with a graphical model [23]. Considering the real-time requirement, accurate positioning and stable tracking of 3D-space targets were achieved in this study based on the authors' previous work and a geometrically constrained spatial-positioning method [10]. Our method limits the computational burden and meets real-time requirements while maintaining tracking accuracy.…”
Section: Depth Estimation in Monocular Vision
confidence: 99%
“…This significantly reduces the computational burden of simultaneously computing feature channels at all scales. Finally, a target-detection and tracking framework based on the FDA-SSD monocular sensor platform was established in combination with the authors' previous geometric-constraint target-location method [10]. Specifically, an efficient and robust geometric-constraint equation was used to solve for the target's spatial position; the normalized depth of the target in the current frame was fed back to the FDA-SSD in the next frame to select a detector at the matching scale.…”
Section: Introduction
confidence: 99%
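The feedback loop described in that statement — normalized depth from the current frame selecting the detector scale for the next frame — can be sketched as a simple binning step. The bin thresholds and scale indices below are hypothetical placeholders, not FDA-SSD's actual configuration:

```python
def select_detector_scale(depth_norm, scale_bins):
    """Map a normalized depth in [0, 1] to a detector scale index.

    scale_bins: list of (depth_upper_bound, scale_index) pairs in
    ascending order of depth. Near targets appear large in the image,
    so they map to a finer (higher-index) detection scale here.
    """
    for upper, scale in scale_bins:
        if depth_norm <= upper:
            return scale
    # Depth beyond the last bound: fall back to the coarsest scale.
    return scale_bins[-1][1]

# Hypothetical three-scale configuration.
bins = [(0.33, 2), (0.66, 1), (1.0, 0)]
scale_next_frame = select_detector_scale(0.2, bins)  # near target -> scale 2
```

In a tracking loop, the scale chosen from frame t's depth estimate would restrict which feature channels the detector evaluates on frame t+1, which is the computational saving the citing paper describes.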
“…Target recognition and feature-baseline extraction in Level 1 follow the authors' previous work [20]. The ORB extraction method used in Level 3 is covered by a large body of related studies.…”
Section: Multi-Level Feature Extraction
confidence: 99%
“…For Level 1, the feature baseline is abstracted as a size feature of the tracked target, such as an edge length or a radius; the extraction method follows our previous work [20]. The following focuses on the feature-baseline extraction methods in Levels 2 and 3.…”
Section: Feature Baseline Extraction
confidence: 99%
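The idea of a size-feature baseline enabling monocular localization can be illustrated with standard pinhole-camera geometry: if the physical length of the baseline (e.g. an edge of the target) is known, its projected pixel length fixes the depth, and the pixel coordinates then fix the 3D position. This is a minimal sketch of that geometric constraint, not the paper's SPPM method; the focal length, principal point, and baseline values are made-up example numbers:

```python
import numpy as np

def depth_from_baseline(f_px, baseline_m, baseline_px):
    """Depth of a fronto-parallel feature baseline under a pinhole model.

    A physical segment of length baseline_m (metres) projecting to
    baseline_px pixels at focal length f_px (pixels) lies at
    Z = f_px * baseline_m / baseline_px.
    """
    return f_px * baseline_m / baseline_px

def backproject(u, v, Z, f_px, cx, cy):
    """Back-project pixel (u, v) at depth Z to camera-frame coordinates."""
    X = (u - cx) * Z / f_px
    Y = (v - cy) * Z / f_px
    return np.array([X, Y, Z])

# Example: a 0.30 m edge spanning 60 px with f = 600 px lies at Z = 3.0 m.
Z = depth_from_baseline(600.0, 0.30, 60.0)
target_xyz = backproject(400.0, 300.0, Z, 600.0, 320.0, 240.0)
```

The fronto-parallel assumption is the simplification here; the cited work's geometric-constraint equations handle the general pose of the baseline, which this sketch does not.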