2020
DOI: 10.1109/access.2020.2975912
OneShotDA: Online Multi-Object Tracker With One-Shot-Learning-Based Data Association

Abstract: Tracking multiple objects in a video sequence can be accomplished by identifying the objects appearing in the sequence and distinguishing between them. Therefore, many recent multi-object tracking (MOT) methods have utilized re-identification and distance metric learning to distinguish between objects by computing the similarity/dissimilarity scores. However, it is difficult to generalize such approaches for arbitrary video sequences, because some important information, such as the number of objects (classes) …
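The abstract casts data association as a one-shot classification problem: previously tracked objects act as support examples and new detections are queries scored against them. As a minimal sketch of the generic similarity-plus-assignment step that such trackers build on (this is not the paper's network; the function name, embeddings, and threshold are illustrative assumptions), in Python:

```python
# Illustrative sketch of similarity-based data association;
# not the OneShotDA model itself.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_embs, det_embs, sim_threshold=0.5):
    """Match tracks to detections by appearance-embedding similarity.

    track_embs: (T, D) array, one embedding per existing track.
    det_embs:   (N, D) array, one embedding per new detection.
    """
    # L2-normalize so the dot product equals cosine similarity.
    t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    sim = t @ d.T  # (T, N) similarity matrix

    # Hungarian assignment; negate so maximizing similarity
    # becomes minimizing cost.
    rows, cols = linear_sum_assignment(-sim)

    # Reject weak matches below the similarity threshold.
    matches = [(r, c) for r, c in zip(rows, cols) if sim[r, c] >= sim_threshold]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [r for r in range(sim.shape[0]) if r not in matched_t]
    unmatched_dets = [c for c in range(sim.shape[1]) if c not in matched_d]
    return matches, unmatched_tracks, unmatched_dets
```

In an online tracker, the unmatched lists would feed track termination and new-track initialization, while matches update each track's state for the next frame.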

Cited by 35 publications (18 citation statements)
References 35 publications
“…We restrict our evaluation to only those methods that are published in peer-reviewed journals and conferences. We evaluate 37 different trackers (Brasó and Leal-Taixé 2020; Wang et al. 2019; Bergmann et al. 2019; Sheng et al. 2018; Maksai and Fua 2019; Yoon et al. 2020; Zhu et al. 2018; Keuper et al. 2018; Chen et al. 2017, 2019; Xu et al. 2019; Henschel et al. 2018, 2019; Long et al. 2018; Kim et al. 2015, 2018; Yoon et al. 2018; Fu et al. 2018, 2019; Chu and Ling 2019; Liu et al. 2019; Song et al. 2019; Karunasekera et al. 2019; Babaee et al. 2019; Cavallaro 2016, 2019; Bewley et al. 2016; Bochinski et al. 2017; Baisa 2018; Song and Jeon 2016; Baisa 2019; Eiselein et al. 2012; Kutschbach et al. 2017; Baisa and Wallace 2019) on MOT17 (Milan et al. 2016). This includes all of the trackers for which the relevant bibliographic information was available when this analysis was performed on 1 April 2020.…”
Section: Evaluating Trackers with HOTA on MOTChallenge
confidence: 99%
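For readers unfamiliar with the metric referenced in this section, HOTA (following its definition in Luiten et al., IJCV 2021) scores a tracker at each localization threshold \(\alpha\) by jointly weighting detection and association quality:

\[
\mathrm{HOTA}_\alpha = \sqrt{\frac{\sum_{c \in \mathrm{TP}} \mathcal{A}(c)}{|\mathrm{TP}| + |\mathrm{FN}| + |\mathrm{FP}|}}, \qquad
\mathcal{A}(c) = \frac{|\mathrm{TPA}(c)|}{|\mathrm{TPA}(c)| + |\mathrm{FNA}(c)| + |\mathrm{FPA}(c)|},
\]

where TPA, FNA, and FPA count association agreements and errors along each true-positive detection's trajectory; the final HOTA score averages \(\mathrm{HOTA}_\alpha\) over thresholds \(\alpha \in \{0.05, 0.10, \ldots, 0.95\}\).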
“…[fragment of a flattened MOT17 results table; each row gives a tracker, its rank, two rank columns with differences in parentheses, and nine metric scores; the column headers are not recoverable from this excerpt]
…2018): 4 | 7 (3) | 7 (3) | 43.6, 51.8, 54.7, 43.6, 44, 46.9, 73.4, 49.3, 69.8
SAS_MOT17 (Maksai and Fua 2019): 5 | 31 (26) | 3 (2) | 43, 44.2, 57.2, 37.5, 49.6, 40, 72.8, 53.2, 75.8
YOONKJ17 (Yoon et al. 2020): 6 | 8 (2) | 10 (4) | 42.9, 51.4, 54, 43, 43.1, 46, 74.1, 48, 70.4
DMAN (Zhu et al. 2018): 7 | 21 (14) | 5 (2) | 42.7, 48.2, 55.7, 39.9, 46, 42.4, 73.2, 49.6, 72.7
jCC (Keuper et al.…”
Section: Analysing Previous Evaluation Metrics
confidence: 99%
“…In the second experiment on tracking performance, we compared the proposed method with state-of-the-art methods on the MOT17 dataset, as shown in Table V. To verify the effectiveness of the proposed tracking method, performance was measured for the following methods: (1) DMAN [6]; (2) DEEP_TAMA [42], using deep temporal appearance matching; (3) STRN [46], applying a spatial-temporal relation network; (4) FAMNet [23], using multi-object assignment; (5) Tracktor [21]; (6) TrctrD17 [33]; (7) YoonKJ17 [47], applying one-shot learning; (8) the proposed SiameseRF without rule distillation; and (9) the proposed SiameseRF with 30% rule distillation. Unlike Table IV, Table V lists public detection results instead of private detections in order to evaluate tracking performance more accurately.…”
Section: Evaluation on MOT17 Challenge Dataset
confidence: 99%
“…Chen et al. [38] utilized R-FCN detector scores [48] to select reliable candidate bounding boxes. Yoon et al. [18] introduced a one-shot learning method for data association and integrated it into the MHT framework.…”
Section: Related Work, A. Multiple Object Tracking
confidence: 99%