2022
DOI: 10.1155/2022/5708807

Enhancement of Local Crowd Location and Count: Multiscale Counting Guided by Head RGB-Mask

Abstract: Background. In dense crowd images, traditional detection models often suffer from inaccurate counts of multiscale targets and low recall. Methods. To solve these two problems, this paper proposes an MLP-CNN model that, combined with an FPN feature pyramid, fuses low-resolution and high-resolution semantic feature maps at low computational cost and effectively addresses inaccurate head counts for multiscale crowds. The MLP-CNN “mid-term” fusion model can effectivel…
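The abstract describes fusing low-resolution (semantically strong) and high-resolution feature maps through an FPN feature pyramid. Below is a minimal PyTorch sketch of that general idea; the channel widths, layer names, and top-down pathway are illustrative assumptions, not the paper's published architecture.

# Minimal sketch of FPN-style fusion of low- and high-resolution feature
# maps, in the spirit of the MLP-CNN + FPN pipeline the abstract describes.
# Channel counts and layer structure are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNFusion(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        # 1x1 lateral convs project each backbone stage to a common width.
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        # 3x3 convs smooth each fused map.
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in in_channels
        )

    def forward(self, feats):
        # feats: backbone maps ordered high-resolution -> low-resolution.
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        # Top-down pathway: upsample each coarse, semantically rich map and
        # add it to the finer, higher-resolution lateral below it.
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest"
            )
        return [s(l) for s, l in zip(self.smooth, laterals)]

feats = [torch.randn(1, 256, 64, 64),
         torch.randn(1, 512, 32, 32),
         torch.randn(1, 1024, 16, 16)]
fused = FPNFusion()(feats)
print([f.shape for f in fused])  # three maps, each with 256 channels

The nearest-neighbor upsample plus elementwise add keeps the fusion cheap, which matches the abstract's claim of combining resolutions "with less computation".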

Citation Types: 0 supporting, 3 mentioning, 0 contrasting

Year Published: 2023

Cited by 1 publication (3 citation statements)
References: 64 publications
“…Ren, Lu et al. [37] proposed an MLP-CNN model that, used in conjunction with an FPN feature pyramid, effectively resolves the problem of inaccurate head counts for multiscale persons by fusing low-resolution and high-resolution semantic feature maps. The MLP-CNN fusion model also enables effective feature fusion of the RGB head image and the RGB-Mask image.…”
Section: A. Deep Neural Network
Citation type: mentioning (confidence: 99%)
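The statement above highlights mid-level ("mid-term") fusion of an RGB head image with an RGB-Mask image. Here is a minimal two-stream sketch of that fusion pattern; the stream depths, channel widths, and the point of concatenation are illustrative assumptions rather than the published design.

# Sketch of a two-stream "mid-term" fusion: RGB and RGB-Mask inputs are
# encoded separately, then their mid-level feature maps are concatenated
# and processed jointly. All sizes here are assumptions for illustration.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True), nn.MaxPool2d(2),
    )

class MidTermFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_stream = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.mask_stream = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        # Joint head after mid-level concatenation; the 1-channel output
        # stands in for a density/count map.
        self.head = nn.Sequential(conv_block(128, 64), nn.Conv2d(64, 1, 1))

    def forward(self, rgb, rgb_mask):
        f = torch.cat([self.rgb_stream(rgb), self.mask_stream(rgb_mask)], dim=1)
        return self.head(f)

rgb = torch.randn(1, 3, 128, 128)
mask = torch.randn(1, 3, 128, 128)  # RGB-Mask rendered as a 3-channel image
print(MidTermFusion()(rgb, mask).shape)  # torch.Size([1, 1, 16, 16])

Fusing at the middle of the network, rather than at the input or the final logits, lets each modality learn its own low-level filters while sharing the later layers.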
“…The attention modules proposed in [36,37,40] allow the model to selectively focus on the most relevant parts of the input data; these weighted features are then used to make predictions. The main function of an attention component is to enhance model performance by dynamically weighting the importance of different parts of the input data.…”
Section: B. Attention-Guided Neural Network
Citation type: mentioning (confidence: 99%)
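As one concrete instance of "dynamically weighting the importance of different parts of the input", here is a squeeze-and-excitation-style channel-attention sketch; the exact modules in [36,37,40] may differ, and the reduction ratio is an assumption.

# SE-style channel attention: global pooling summarizes each channel, a
# small MLP produces per-channel weights in [0, 1], and the input features
# are rescaled by those weights before prediction. One simple example of
# attention-based feature weighting; not the cited papers' exact module.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))  # per-channel importance weights
        return x * w.view(b, c, 1, 1)    # reweight the feature map

x = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])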