2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2019.00216
Omnidirectional Pedestrian Detection by Rotation Invariant Training

Cited by 44 publications (40 citation statements) | References 28 publications
“…Considering the attractive performance and straightforward implementation, most existing omnidirectional pedestrian detection methods (Nguyen et al, 2016;Seidel et al, 2019;Li et al, 2019;Duan et al, 2020), as well as ours in the conference paper (Tamura et al, 2019), adopt YOLO as a base detector. YOLO uses pixel-wise dense prediction, which outputs duplicated detection results, and thus requires a suppression postprocess.…”
Section: Object Detection
confidence: 99%
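The suppression postprocess referred to in this excerpt is non-maximum suppression (NMS). Below is a minimal NumPy sketch of greedy NMS over axis-aligned boxes, assuming an (x1, y1, x2, y2) box format and a per-box score array; it is an illustration of the general technique, not the exact postprocess used by the cited detectors.

    import numpy as np

    def nms(boxes, scores, iou_threshold=0.5):
        """Greedy non-maximum suppression over axis-aligned boxes.

        boxes:  (N, 4) array of [x1, y1, x2, y2]
        scores: (N,) array of detection confidences
        Returns the indices of the boxes that are kept.
        """
        x1, y1, x2, y2 = boxes.T
        areas = (x2 - x1) * (y2 - y1)
        order = scores.argsort()[::-1]          # highest score first
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            # IoU of the top-scoring box with the remaining candidates
            xx1 = np.maximum(x1[i], x1[order[1:]])
            yy1 = np.maximum(y1[i], y1[order[1:]])
            xx2 = np.minimum(x2[i], x2[order[1:]])
            yy2 = np.minimum(y2[i], y2[order[1:]])
            inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
            iou = inter / (areas[i] + areas[order[1:]] - inter)
            # drop candidates that overlap the kept box too strongly
            order = order[1:][iou <= iou_threshold]
        return keep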
“…The default data augmentation is used in this type. We consider this detector as a baseline because this training method is that of our conference paper (Tamura et al, 2019) enhanced with the angle-aware detection.…”
Section: Quantitative Comparisons
confidence: 99%
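Rotation-invariant training of the kind described in this excerpt is commonly implemented as data augmentation that rotates the overhead fisheye image about the image center and rotates the annotations with it. The sketch below is a hedged illustration under assumed conventions (a [cx, cy, w, h, theta] box format and OpenCV-based warping), not the exact recipe of Tamura et al. (2019).

    import cv2
    import numpy as np

    def rotate_sample(image, boxes, angle_deg=None):
        """Rotate an overhead fisheye image about its center and update
        center-format boxes [cx, cy, w, h, theta] accordingly.

        Each box center (cx, cy) is rotated with the image, and the
        in-plane orientation theta is shifted by the same angle.
        """
        if angle_deg is None:
            angle_deg = np.random.uniform(0.0, 360.0)
        h, w = image.shape[:2]
        center = (w / 2.0, h / 2.0)
        M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
        rotated = cv2.warpAffine(image, M, (w, h))

        boxes = np.asarray(boxes, dtype=np.float32).copy()
        pts = np.hstack([boxes[:, :2], np.ones((len(boxes), 1), np.float32)])
        boxes[:, :2] = pts @ M.T                 # rotate box centers
        boxes[:, 4] = (boxes[:, 4] + np.deg2rad(angle_deg)) % (2 * np.pi)
        return rotated, boxes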
“…However, the transformation relies heavily on calibrated camera parameters, which requires user-interaction. Other works [29,22] train orientation-aware networks using pedestrian images with synthetic rotation. However, such training introduces large computational cost and the trained network is biased towards the data used.…”
Section: Omnidirectional Video Analysis
confidence: 99%
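The calibration-dependent transformation criticized in this excerpt typically unwarps the fisheye image into a panoramic or perspective view before applying a standard detector. A minimal OpenCV sketch of such a polar unwarping is shown below; the image center and fisheye radius are assumed to come from an offline calibration, and the output size is an arbitrary choice for illustration.

    import cv2

    def unwarp_fisheye_to_panorama(fisheye_img, center, radius, out_size=(1024, 256)):
        """Unwarp an overhead fisheye image into a panoramic strip.

        center and radius are assumed to come from an offline calibration
        of the omnidirectional camera; out_size is (width, height) of the
        resulting panorama.
        """
        # warpPolar maps the circle (center, radius) onto a rectangle whose
        # x-axis is the radial direction and y-axis is the angle.
        polar = cv2.warpPolar(
            fisheye_img,
            (out_size[1], out_size[0]),   # dsize: radial samples x angular samples
            center,
            radius,
            cv2.WARP_POLAR_LINEAR,
        )
        # Rotate so the angular axis runs horizontally, as in a panorama.
        return cv2.rotate(polar, cv2.ROTATE_90_COUNTERCLOCKWISE)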