2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00326
RAPiD: Rotation-Aware People Detection in Overhead Fisheye Images

Cited by 65 publications (36 citation statements)
References 21 publications
“…This is done by first detecting and tracking the persons in 360° video frames who are close to the centre, and then extracting spatio-temporal features from them. For person detection, we use the Rotation-Aware People Detection method RAPiD [22] trained on top-view 360° images. It outputs the bounding box co-ordinates of all persons present in the 360° video frame (co-ordinates of the centroid of the bounding box, its width, height, angle of rotation and confidence of detection).…”
Section: Proposed Methods (mentioning)
confidence: 99%
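
The quoted passage describes the detection output as a per-person tuple of centroid coordinates, width, height, rotation angle and detection confidence. The sketch below, which assumes a (cx, cy, w, h, angle_deg, conf) layout and a hypothetical rotated_box_corners helper rather than the actual RAPiD API, shows how such a rotated box could be converted to corner points, e.g. for drawing on a fisheye frame.

import numpy as np

# Minimal sketch (assumed tuple layout, not the RAPiD API): convert a
# detection given as (cx, cy, w, h, angle_deg, conf) into the four corner
# points of its rotated bounding box.
def rotated_box_corners(cx, cy, w, h, angle_deg):
    """Return a (4, 2) array of corner coordinates for a rotated box."""
    theta = np.deg2rad(angle_deg)
    # Axis-aligned corners centred at the origin.
    corners = np.array([[-w / 2, -h / 2],
                        [ w / 2, -h / 2],
                        [ w / 2,  h / 2],
                        [-w / 2,  h / 2]])
    # Rotate by theta, then translate to the box centre.
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return corners @ rot.T + np.array([cx, cy])

# One hypothetical detection: centroid, size, rotation angle, confidence.
cx, cy, w, h, angle_deg, conf = 410.0, 305.0, 58.0, 120.0, 35.0, 0.91
if conf > 0.5:  # keep only confident detections
    print(rotated_box_corners(cx, cy, w, h, angle_deg))
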
“…Considering the attractive performance and straightforward implementation, most existing omnidirectional pedestrian detection methods (Nguyen et al., 2016; Seidel et al., 2019; Li et al., 2019; Duan et al., 2020), as well as ours in the conference paper (Tamura et al., 2019), adopt YOLO as a base detector. YOLO uses pixel-wise dense prediction, which outputs duplicated detection results, and thus requires a suppression postprocess.…”
Section: Object Detection (mentioning)
confidence: 99%
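
The suppression postprocess mentioned in the quote is typically greedy non-maximum suppression (NMS). The sketch below illustrates it for axis-aligned (x1, y1, x2, y2) boxes; a rotation-aware detector would swap in a rotated-IoU computation. The nms helper and the 0.5 IoU threshold are illustrative assumptions, not code from any of the cited detectors.

import numpy as np

# Greedy NMS sketch for axis-aligned boxes; rotated boxes would need a
# rotated IoU instead of the min/max overlap below.
def nms(boxes, scores, iou_thresh=0.5):
    """Return indices of boxes kept after greedy non-maximum suppression."""
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with the remaining candidates.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                  (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + area_r - inter)
        # Drop candidates that overlap the kept box too much.
        order = order[1:][iou <= iou_thresh]
    return keep

# Two near-duplicate boxes and one distinct box: the duplicate is suppressed.
boxes = np.array([[10, 10, 50, 80], [12, 12, 52, 82], [200, 40, 240, 120]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]
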
“…However, they require a certain amount of computational time to process multiple images transformed from an omnidirectional image, which substantially degrades the detection speed. Duan et al. (2020) proposed a rotation-aware pedestrian detector and several omnidirectional pedestrian detection datasets that are annotated with rotated bounding boxes. To predict the angles of the boxes, an angle-aware loss is introduced into YOLOv3.…”
Section: Omnidirectional Pedestrian Detection (mentioning)
confidence: 99%
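
As a rough illustration of what an angle-aware regression term can look like (a hedged sketch, not the exact loss of the cited detector), the snippet below wraps the angle error into one period before applying a smooth-L1 penalty, so a prediction that differs from the target by a full turn is not penalised; the 2π period and the helper names are assumptions.

import numpy as np

# Hedged sketch of a periodic angle loss: wrap the error into (-pi, pi]
# before applying smooth L1, so angles that differ by a full turn match.
def wrapped_angle_error(pred, target, period=2 * np.pi):
    """Fold the angle difference into (-period/2, period/2]."""
    diff = (pred - target) % period
    return np.where(diff > period / 2, diff - period, diff)

def smooth_l1(x, beta=1.0):
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * ax ** 2 / beta, ax - 0.5 * beta)

def angle_loss(pred_angles, target_angles):
    return smooth_l1(wrapped_angle_error(pred_angles, target_angles)).mean()

# A prediction off by almost a full turn is treated as nearly correct.
print(angle_loss(np.array([2 * np.pi - 0.05]), np.array([0.0])))  # ~0.00125
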