2019 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2019.8803409

DISAM: Density Independent and Scale Aware Model for Crowd Counting and Localization

Abstract: People counting in high density crowds is emerging as a new frontier in crowd video surveillance. Crowd counting in high density crowds encounters many challenges, such as severe occlusions, few pixels per head, and large variations in head size. In this paper, we propose a novel Density Independent and Scale Aware model (DISAM), which works as a head detector and takes into account the scale variations of heads in images. Our model is based on the intuition that the head is the only visible part in high…
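The abstract casts counting in dense crowds as head detection: once every head is localized, the count is just the number of confident detections, and localization comes for free. Below is a minimal sketch of that detection-based counting idea; the Detection type, the score threshold, and the detector output format are illustrative assumptions, not DISAM's published API.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    box: Tuple[float, float, float, float]  # (x1, y1, x2, y2) head bounding box
    score: float                            # detector confidence in [0, 1]

def count_and_localize(detections: List[Detection],
                       score_threshold: float = 0.5) -> Tuple[int, List[Tuple[float, float]]]:
    """Turn raw head detections into a crowd count plus head-center locations.

    Detection-based counters report the number of confident head boxes;
    localization falls out for free as the box centers.
    """
    kept = [d for d in detections if d.score >= score_threshold]
    centers = [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
               for (x1, y1, x2, y2) in (d.box for d in kept)]
    return len(kept), centers

Given any head detector's output, count_and_localize returns both the crowd count and per-person head centers, which mirrors the counting-plus-localization pairing in the paper's title.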

Cited by 27 publications (22 citation statements). References 17 publications.

Citation statements (ordered by relevance):
“…Lastly, since motion information is not included when training the model presented in [25], if a tracked object moving in one direction is partially occluded by a similar object moving in the opposite direction, the tracker may latch onto the wrong object. In [26], the authors propose a neural network model that can localize the exact position of people in a scene. With this information, people can be detected and tracked in dense environments.…”
Section: Multi-target Tracking and Data Association (mentioning)
confidence: 99%
“…The rest of the points are generated with a spreading factor of λ around the mean, as depicted in (26) and (27).…”
Section: The Unscented Kalman Filter and Sensor Fusion (mentioning)
confidence: 99%
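For context on the quoted sigma-point construction: in the unscented transform the first sigma point is the mean itself, and the remaining 2n points are spread symmetrically around it using the factor lambda = alpha^2 (n + kappa) - n. The sketch below shows the textbook form of that construction; equations (26) and (27) belong to the citing paper and are not reproduced here.

import numpy as np

def sigma_points(mean: np.ndarray, cov: np.ndarray,
                 alpha: float = 1e-3, kappa: float = 0.0) -> np.ndarray:
    """Generate the 2n+1 sigma points of the unscented transform.

    The first point is the mean; the remaining 2n points are spread
    symmetrically around it with lambda = alpha^2 * (n + kappa) - n.
    """
    n = mean.shape[0]
    lam = alpha**2 * (n + kappa) - n
    # Matrix square root of (n + lambda) * cov via Cholesky factorization.
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    points = np.empty((2 * n + 1, n))
    points[0] = mean
    for i in range(n):
        points[1 + i] = mean + sqrt_cov[:, i]
        points[1 + n + i] = mean - sqrt_cov[:, i]
    return points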
“…For the WIDERFACE dataset, we use a group of face detectors that includes CMS-RCNN [58], Multitask-CNN [55], ACF [48], and TinyFace [13], while the head detector group includes DPM-Head [52], DHD [35], FCHD [44], and DISAM [17].…”
Section: B. Comparison With Specific Head Detectors (mentioning)
confidence: 99%
“…Due to the small size of heads, the DPM detector could not detect heads smaller than 23 × 23 pixels. DISAM [17], on the other hand, achieved comparable results by tackling the scale problem to some extent; however, the method suffers from the following limitations: (1) the model follows the traditional R-CNN pipeline, which uses a scale-aware strategy for object proposal generation. The strategy typically requires human effort to generate a scale map.…”
Section: B. Comparison With Specific Head Detectors (mentioning)
confidence: 99%
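The scale-aware proposal strategy criticized in the quote above can be pictured as follows: a per-pixel scale map encodes the expected head size at each image location, and proposals are generated at that size rather than at a fixed set of anchor scales. The sketch below is a hedged illustration of that idea; the scale-map format, the sampling stride, and the function name scale_aware_proposals are assumptions for illustration, not DISAM's published procedure.

import numpy as np
from typing import List, Tuple

def scale_aware_proposals(scale_map: np.ndarray,
                          stride: int = 16) -> List[Tuple[int, int, int, int]]:
    """Generate head proposals whose size follows a per-pixel scale map.

    scale_map[y, x] holds the expected head size (in pixels) at (x, y);
    such maps are often drawn by hand from perspective cues, which is the
    manual effort the quoted comparison points out.
    """
    h, w = scale_map.shape
    proposals = []
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            s = float(scale_map[y, x])
            if s <= 0:  # no expected head at this location
                continue
            half = s / 2.0
            proposals.append((int(max(0, x - half)), int(max(0, y - half)),
                              int(min(w - 1, x + half)), int(min(h - 1, y + half))))
    return proposals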
“…Serious games, also called applied games, are used by industries such as defense [3], health care [4], emergency management [5], [6], education [7], exploration [8], city planning [9], engineering [10], and politics [11]. In addition, computer vision and deep learning based techniques can be exploited in a gaming context to analyze the performance of sports players through tracking [12], [13] in a virtual environment [14], detection [15], [16], analysing anomalous behaviour [17], [18], simulating individual [19], [20] and crowd behaviour [21]-[25] for public infrastructure design [26], [27], and a variety of other multimedia applications [28], [29].…”
Section: Introduction (mentioning)
confidence: 99%