2021
DOI: 10.1109/jsen.2021.3100588
Novel Deep-Learning-Aided Multimodal Target Tracking

Cited by 13 publications (15 citation statements)
References 30 publications
“…[Figure 3: pseudo-code of the improved salp swarm algorithm; output: the optimal location is the food source F_j.] The value found through the similarity function is the target.…”
Section: Visual Tracking Based on Improved Salp Swarm Algorithm
confidence: 99%
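The excerpt above references the pseudo-code of an improved salp swarm algorithm, with the food source as the optimal location and a similarity function as the objective. As a rough sketch of the baseline salp swarm optimizer such a tracker builds on (the leader/follower update of Mirjalili et al.; the generic `fitness` callable standing in for the similarity function is an assumption, not the cited paper's improved variant):

```python
import numpy as np

def salp_swarm(fitness, lb, ub, n_salps=30, n_iter=100, seed=0):
    """Minimal baseline salp swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    pos = rng.uniform(lb, ub, (n_salps, dim))
    fit = np.array([fitness(p) for p in pos])
    best = pos[fit.argmin()].copy()          # food source F
    best_fit = float(fit.min())
    for l in range(1, n_iter + 1):
        # exploration/exploitation coefficient decays over iterations
        c1 = 2.0 * np.exp(-(4.0 * l / n_iter) ** 2)
        for i in range(n_salps):
            if i == 0:  # leader moves around the food source
                c2, c3 = rng.random(dim), rng.random(dim)
                step = c1 * ((ub - lb) * c2 + lb)
                pos[i] = np.where(c3 >= 0.5, best + step, best - step)
            else:       # followers average with the salp ahead in the chain
                pos[i] = (pos[i] + pos[i - 1]) / 2.0
            pos[i] = np.clip(pos[i], lb, ub)
            f = float(fitness(pos[i]))
            if f < best_fit:
                best, best_fit = pos[i].copy(), f
    return best, best_fit

# Example: minimize the 2-D sphere function
best, val = salp_swarm(lambda x: float(np.sum(x ** 2)), [-5, -5], [5, 5])
```

In a tracking context, `fitness` would score candidate target locations against the template via the similarity function, and `best` would be the estimated target position.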
“…Deep-learning trackers mostly use generic target feature representations, extracted by off-line training on non-tracking data sets, for online target tracking. Tracking algorithms based on correlation filters (CF) [1,2] and deep learning [3,4] have been put forward in succession, yielding major breakthroughs in target tracking.…”
Section: Introduction
confidence: 99%
“…Computer-vision detection based on deep learning can be influenced by external factors in practical applications, biasing the detection results. The paper applies computer-vision principles and introduces attention mechanisms into conventional deep-learning networks [7] to establish the multitarget detection framework shown in Figure 1 for feature extraction, feature pooling, and classification regression of camera-acquired images.…”
Section: Computer Vision Multitarget Detection Algorithms
confidence: 99%
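The excerpt mentions adding attention mechanisms to a conventional detection network but does not specify which kind. A minimal sketch, assuming a squeeze-and-excitation-style channel gate (the `squeeze_excite` helper and its random weights are purely illustrative, not the cited framework):

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Channel attention (SE-style): reweight channels by learned importance."""
    # feature_map: (C, H, W)
    z = feature_map.mean(axis=(1, 2))          # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)                # excitation: FC + ReLU -> (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # FC + sigmoid -> per-channel weight in (0, 1)
    return feature_map * gate[:, None, None]   # rescale each channel

# Toy example: 8 channels, 4x4 spatial map, reduction ratio r = 2
C, H, W, r = 8, 4, 4, 2
rng = np.random.default_rng(0)
fmap = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = squeeze_excite(fmap, w1, w2)
```

The sigmoid gate keeps each channel weight in (0, 1), so attention attenuates uninformative channels rather than amplifying noise.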
“…In the formula, L denotes the left camera, A denotes the right camera, χ1 denotes the projection matrix of the left camera, and χ2 denotes the projection matrix of the right camera. Based on the left- and right-camera perspective projection matrix transformation relationships shown in (7) and (8), the spatial coordinates of the target can be deduced from the known image-point coordinates, and the distance measurement is obtained by comparing the coordinate information. Note that camera images in complex environments can contain substantial noise, which can degrade the accuracy of the distance measurement.…”
Section: Target Distance Measurement Programme
confidence: 99%
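Recovering a 3-D point from its projections in two calibrated cameras, as described above, is classically done by linear (DLT) triangulation. A minimal sketch under the assumption of an ideal rectified stereo pair (the `triangulate` helper, intrinsics, and baseline here are illustrative, not the cited paper's implementation):

```python
import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation: stack the projection constraints into
    A @ X = 0 and take the null-space direction via SVD."""
    u1, v1 = x_left
    u2, v2 = x_right
    A = np.array([
        u1 * P_left[2] - P_left[0],
        v1 * P_left[2] - P_left[1],
        u2 * P_right[2] - P_right[0],
        v2 * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous 3-D point, defined up to scale
    return X[:3] / X[3]

# Rectified pair: left camera at the origin, right camera offset by baseline b
f, b = 800.0, 0.1
K = np.array([[f, 0.0, 320.0], [0.0, f, 240.0], [0.0, 0.0, 1.0]])
P_L = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_R = K @ np.hstack([np.eye(3), np.array([[-b], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 2.0])
xl = P_L @ np.append(X_true, 1.0); xl = xl[:2] / xl[2]   # project into left image
xr = P_R @ np.append(X_true, 1.0); xr = xr[:2] / xr[2]   # project into right image
X_hat = triangulate(P_L, P_R, xl, xr)
```

With noise-free image points the reconstruction is exact; with the noisy images the excerpt warns about, the SVD solution becomes a least-squares estimate and its accuracy degrades accordingly.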
“…In recent years, quadrotor unmanned aerial vehicles (UAVs) have been frequently used in applications related to the Internet of Things [1] or to supplant human efforts in hazardous environments such as disaster sites [2], [3]. As a result, the problem of accurate and reliable quadrotor localization, as an essential requirement for successful mission performance, has received great attention [4]- [6]. The Global Positioning System (GPS) is one of the most representative technologies for location estimation [7].…”
Section: Introduction
confidence: 99%