2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01846
Opening up Open World Tracking

Cited by 30 publications (12 citation statements)
References 61 publications
“…With the maturation of tracking and VIS tasks, recently introduced datasets have started to emphasize more challenging scenarios like tracking of objects of unknown category and tracking through heavy occlusions. The TAO-OW dataset [31] builds an open-world MOT benchmark on top of the long-tailed TAO dataset [15] by introducing a set of "unknown" class labels which are not included in the training set. Both TAO and TAO-OW are not exhaustively labeled.…”
Section: Related Work (mentioning)
confidence: 99%
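As a rough illustration of the known/unknown split such an open-world benchmark relies on, the sketch below partitions ground-truth tracks by whether their category belongs to the training vocabulary. The category names and annotation layout are hypothetical, not the actual TAO-OW schema.

# Minimal sketch of the known/unknown split behind an open-world MOT
# benchmark: categories seen during training count as "known", everything
# else is evaluated as "unknown". Names and layout are hypothetical.
KNOWN_CLASSES = {"person", "car", "dog"}  # assumed training vocabulary

def split_tracks_by_vocabulary(gt_tracks):
    """Partition ground-truth tracks into known vs. unknown subsets."""
    known, unknown = [], []
    for track in gt_tracks:
        (known if track["category"] in KNOWN_CLASSES else unknown).append(track)
    return known, unknown

# Example: the 'kangaroo' and 'cat' tracks fall into the unknown split.
tracks = [
    {"id": 0, "category": "person"},
    {"id": 1, "category": "kangaroo"},
    {"id": 2, "category": "cat"},
]
known, unknown = split_tracks_by_vocabulary(tracks)
print(len(known), len(unknown))  # -> 1 2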
“…Other works have used class agnostic localizers [11,51] to perform MOT on arbitrary objects. Recently, Liu et al [39] defined open-world tracking, a task that focuses on the evaluation of previously unseen objects. In particular, it requires any-object tracking as a stage that precedes object classification.…”
Section: Deployment (mentioning)
confidence: 99%
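A minimal sketch of that ordering follows, assuming per-frame class-agnostic boxes, greedy IoU association, and a track-level classifier applied only after association; all names and thresholds are illustrative assumptions, not the method of [39].

# "Track first, classify later": boxes are associated across frames purely
# class-agnostically (greedy IoU matching), and a classifier is consulted
# only once tracks exist. Names and thresholds are assumptions.
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def track_class_agnostic(frames, iou_thresh=0.5):
    """frames: list of per-frame lists of class-agnostic boxes."""
    tracks = []  # each track is a list of (frame_idx, box) pairs
    for t, boxes in enumerate(frames):
        unmatched = list(boxes)
        for tr in tracks:
            last_box = tr[-1][1]
            best = max(unmatched, key=lambda b: iou(last_box, b), default=None)
            if best is not None and iou(last_box, best) >= iou_thresh:
                tr.append((t, best))
                unmatched.remove(best)
        tracks.extend([(t, b)] for b in unmatched)  # start new tracks
    return tracks

def classify_tracks(tracks, classifier):
    """Second stage: assign one label per track, after association is done."""
    return [(tr, classifier(tr)) for tr in tracks]

In this arrangement the classifier only influences the reported labels, never which objects get tracked, which is what makes the first stage open to previously unseen categories.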
“…Existing works perform scene segmentation and class agnostic tracking before classification [46,48,50] or utilize class-agnostic proposal generation [11,51], similar to open-world detection methods. Liu et al [39] propose an open-world tracking benchmark, TAO-OW, that evaluates class-agnostic tracking as a task that precedes classification. However, this comes with the limitation that the evaluation only captures tracker recall and no classification accuracy.…”
Section: Related Work (mentioning)
confidence: 99%
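The recall-only character of that evaluation can be illustrated with a toy measure that ignores predicted labels entirely; this is not the benchmark's own metric (the paper proposes OWTA), and the track layout, matching rule, and thresholds below are simplifying assumptions.

# Toy recall-only, class-agnostic track evaluation: predicted labels never
# enter the score, only localization/association quality does. Track layout
# (dict: frame index -> box) and thresholds are assumptions for illustration.
def frame_iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def track_recall(gt_tracks, pred_tracks, iou_thresh=0.5, min_overlap=0.5):
    """Fraction of GT tracks matched by some predicted track, labels ignored.

    A GT track counts as recalled if one predicted track overlaps it
    (IoU >= iou_thresh) in at least min_overlap of the GT frames.
    """
    recalled = 0
    for gt in gt_tracks:
        for pred in pred_tracks:
            hits = sum(
                1 for f, box in gt.items()
                if f in pred and frame_iou(box, pred[f]) >= iou_thresh
            )
            if hits / len(gt) >= min_overlap:
                recalled += 1
                break
    return recalled / max(len(gt_tracks), 1)

Because labels never appear in this computation, a tracker is rewarded for covering unknown objects while its classification quality remains invisible, which is exactly the limitation noted in the excerpt above.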
“…These include statistic-based methods [71,65], visual similarity-based clustering methods [29,26,40], linkage analysis [42] with appearance and geometric consistency [15,84,85,86], visual saliency [105,39], and unsupervised feature learning using deep generative models [44,72,63,3]. In contrast, unsupervised object detection from LiDAR sequences is fairly under-explored [18,94,78,48]. [18,57] proposed to sequentially update the detections and perform tracking based on motion cues from 3D LiDAR scans.…”
Section: Related Work (mentioning)
confidence: 99%