2022
DOI: 10.1016/j.jocs.2022.101760
Centroid based person detection using pixelwise prediction of the position

Cited by 4 publications (9 citation statements)
References 27 publications
“…Three centroid-based object detectors (Dolezel et al., 2022), which return the coordinates of scale centroids, form the core of the scale counting system (Figure S1). The scale detectors utilize localization maps generated using U-Net models (Ronneberger et al., 2015).…”
Section: Methods (mentioning, confidence: 99%)
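The statement above quotes a detector that turns a U-Net localization map into centroid coordinates. A minimal sketch of one common way to implement that reading-off step is given below; the thresholding-plus-connected-components approach, the 0.5 threshold, and all names are illustrative assumptions, not taken from the cited papers.

```python
# A minimal sketch (not the authors' released code) of the post-processing step
# implied above: the U-Net predicts a pixelwise localization map, and object
# centroids are recovered as the centres of the high-response blobs.
# The function name and the 0.5 threshold are illustrative assumptions.
import numpy as np
from scipy import ndimage


def centroids_from_localization_map(loc_map: np.ndarray,
                                    threshold: float = 0.5) -> np.ndarray:
    """Return (row, col) centroid coordinates of blobs in a localization map."""
    mask = loc_map >= threshold                # keep confident pixels only
    labels, n_blobs = ndimage.label(mask)      # connected-component labelling
    if n_blobs == 0:
        return np.empty((0, 2))
    # centre of mass of each blob, weighted by the predicted map values
    centres = ndimage.center_of_mass(loc_map, labels, range(1, n_blobs + 1))
    return np.asarray(centres)
```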
“…The target scales in the maps are represented as circles with centroids placed at coordinates corresponding to the centers of the scales. The circle circumferences represent the borders of nonzero values in the maps, where values at the centroid coordinates are at one, and the values within a circle decrease to zero with increasing distance from the circle centroid (Dolezel et al., 2022).…”
Section: Methods (mentioning, confidence: 99%)
“…The target scales in the maps are represented as circles with centroids placed at coordinates corresponding to the centres of the scales. The circle circumferences represent the borders of nonzero values in the maps, where values at the centroid coordinates are at one, and the values within a circle decrease to zero with increasing distance from the circle centroid (Dolezel et al. 2022).…”
Section: Methods (mentioning, confidence: 99%)
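The two statements above describe how the target localization maps are constructed: each annotated centroid becomes a circle whose value is one at the centroid and falls to zero at the circle border. The sketch below builds such a map under the assumption of a linear decay and an arbitrary illustrative radius; neither value is taken from the cited text.

```python
# A minimal sketch of building a target localization map from annotated
# centroids: each scale becomes a circle with value 1 at the centroid that
# decays to 0 at the circle border. The linear decay profile and the
# 15-pixel radius are illustrative assumptions.
import numpy as np


def make_target_map(shape, centroids, radius=15.0):
    """Build a localization map with decaying circles centred on `centroids`."""
    rows, cols = np.indices(shape, dtype=float)
    target = np.zeros(shape, dtype=float)
    for cy, cx in centroids:
        dist = np.hypot(rows - cy, cols - cx)    # distance of every pixel to this centroid
        circle = np.clip(1.0 - dist / radius, 0.0, 1.0)
        target = np.maximum(target, circle)      # overlapping circles keep the larger value
    return target
```

For example, make_target_map((480, 640), [(100, 200), (300, 400)]) returns a 480×640 array that is zero everywhere except inside the two circles around the annotated points.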
“…We trained and validated the U-Net models on a set of 227 expert-annotated photos, following the recommended training-validation procedure (Dolezel et al. 2022). The annotator marked the centre of each target scale in each photo.…”
Section: Methods (mentioning, confidence: 99%)
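The last statement mentions training and validating the U-Net models on 227 expert-annotated photos in which each scale centre is marked. A minimal sketch of the data bookkeeping such a procedure typically involves is given below; the 80/20 split, the fixed seed, and the pixelwise MSE objective are assumptions for illustration and are not taken from the quoted text.

```python
# A minimal sketch of a training-validation setup: shuffle the annotated
# photos, hold out a validation subset, and score predicted maps against
# target maps (built e.g. as in the sketch above) with a pixelwise error.
# The split fraction, seed, and loss choice are illustrative assumptions.
import numpy as np


def train_validation_split(items, val_fraction=0.2, seed=0):
    """Shuffle `items` and split them into training and validation lists."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(items))
    n_val = int(round(val_fraction * len(items)))
    val_idx, train_idx = order[:n_val], order[n_val:]
    return [items[i] for i in train_idx], [items[i] for i in val_idx]


def mse_loss(predicted_map, target_map):
    """Pixelwise mean-squared error between a predicted and a target map."""
    return float(np.mean((np.asarray(predicted_map) - np.asarray(target_map)) ** 2))
```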