2018
DOI: 10.1007/s11042-018-6525-0
A polar model for fast object tracking in 360-degree camera images

Cited by 4 publications (5 citation statements)
References 33 publications
“…In this way, it boosted the overall speed of the algorithm with no effect on tracking performance. Figure 11 shows a comparison of our method's speed performance with the performances presented by Ahmad et al. [18]. These methods also presented simple online arbitrary object trackers, using conventional approaches for fast object tracking in 360° videos.…”
Section: Speed Evaluation
confidence: 91%
“…However, the processing speed of their framework did not meet real-time requirements. In 2018, Ahmad et al. refined their previous work presented in [16] and designed an enhanced polar model [18] for fast object tracking in polar sequences. Still, at a speed of 9 fps, they did not come much closer to real time.…”
Section: Related Work
confidence: 99%
“…In solving the filter L(x), it is described as a regularized least-squares objective function in the form of the tangent function as in (6), where l(x) is the filter associated with the x-th frame, l(x)_n denotes the filter corresponding to each feature dimension, g(x)_d is the feature map corresponding to each feature in the input candidate frame, and n = 1, 2, …, N, where N is the number of feature dimensions and takes the value of 10.…”
Section: Panoramic Video Multitarget Real-time Tracking
confidence: 99%
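The regularized least-squares filter objective quoted above has a well-known closed-form solution in the Fourier domain when the filter is learned jointly over the N feature dimensions. The sketch below is a minimal illustration of that general technique (a ridge-style correlation filter, as in MOSSE/KCF-family trackers); the function names, the closed form, and the Gaussian target are assumptions for illustration, not the cited paper's exact solver:

```python
import numpy as np

def learn_filter(features, target, lam=0.01):
    """Regularized least-squares (ridge) filter learned per feature
    dimension in the Fourier domain.

    features: (N, H, W) feature maps g_n of one frame (N dimensions).
    target:   (H, W) desired response, typically a Gaussian peak.
    lam:      regularization weight (illustrative value).
    """
    G = np.fft.fft2(features, axes=(-2, -1))  # per-dimension spectra
    Y = np.fft.fft2(target)
    # Closed-form ridge solution, shared denominator over dimensions:
    #   L_n = conj(G_n) * Y / (sum_n |G_n|^2 + lam)
    denom = np.sum(np.abs(G) ** 2, axis=0) + lam
    return np.conj(G) * Y / denom

def respond(filters, features):
    """Correlation response: sum filter outputs over feature dimensions."""
    G = np.fft.fft2(features, axes=(-2, -1))
    return np.real(np.fft.ifft2(np.sum(filters * G, axis=0)))
```

Applied to the training frame itself, the response peaks where the target response peaks, which is the property trackers of this family exploit frame to frame.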
“…Currently, most tracking algorithms are trained, tested, and evaluated online with parameters based on publicly available datasets, and the target tracking models trained in this way show good tracking performance on certain datasets [5]. However, the ultimate task of target tracking algorithms is to handle tracking in real application scenarios, where the environment is subject to unpredictable interference at any time; this makes tracking scenarios complex and variable and ultimately affects the algorithms' results [6]. In complex scenarios such as target occlusion, illumination change, deformation, low resolution, low illumination, and rotation, tracking drift or loss may still occur due to interference in the target appearance model, weak model discrimination, or incorrect model updates, so tracking accuracy and robustness still need further improvement to meet the needs of practical applications [7].…”
Section: Introduction
confidence: 99%
“…The first detects each person with an object detector, and the second associates cross-frame identities by considering appearance similarity and trajectory trend [7,8]. Most existing studies work on 2D/3D single-view [2,5] or 2D panoramic multi-target tracking [9,10]. However, previous works have limitations: 2D tracking results cannot be used directly in some applications (e.g., robotics), since real-world coordinates are 3D, and it is easy to lose the tracked target in a 3D narrow-angle-view coordinate, since it covers only part of the surrounding environment.…”
Section: Introduction
confidence: 99%
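The two-stage pipeline described above (detect each person, then associate identities across frames) can be sketched with a simple greedy overlap-based matcher. This is a generic tracking-by-detection illustration: the cited works additionally use appearance similarity and trajectory trend, which are omitted here, and the function names and IoU threshold are illustrative assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedily match each existing track box to the best-overlapping
    new detection. Returns (matches, unmatched_detection_indices);
    unmatched detections would typically start new tracks."""
    matches, unmatched = [], list(range(len(detections)))
    for ti, tbox in enumerate(tracks):
        best, best_j = thresh, None
        for j in unmatched:
            score = iou(tbox, detections[j])
            if score > best:
                best, best_j = score, j
        if best_j is not None:
            matches.append((ti, best_j))
            unmatched.remove(best_j)
    return matches, unmatched
```

In practice the greedy loop is often replaced by optimal assignment (e.g., the Hungarian algorithm) over a combined overlap/appearance cost, but the interface is the same: boxes in, track-to-detection correspondences out.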