Proceedings of the 29th ACM International Conference on Multimedia 2021
DOI: 10.1145/3474085.3475377
Coarse to Fine: Domain Adaptive Crowd Counting via Adversarial Scoring Network

Abstract: Recent deep networks have convincingly demonstrated high capability in crowd counting, a critical task that attracts widespread attention due to its various industrial applications. Despite such progress, trained data-dependent models usually cannot generalize well to unseen scenarios because of the inherent domain shift. To address this issue, this paper proposes a novel adversarial scoring network (ASNet) to gradually bridge the gap across domains from coarse to fine granularity. Specifically, at th…


Cited by 26 publications (13 citation statements)
References 49 publications
“…Table 3 (* means that only the source-domain images pass through the style transfer layer when training the model) shows that our method achieves the best MAE. In particular, the RMSE of our method improves by 7.6% compared with the recent domain adaptation method ASNet [10]. The results in the table also show that our method outperforms the variant in which only the source-domain data pass through the style transfer layer.…”
Section: Comparison With Other State-of-the-art Algorithms
confidence: 72%
“…Recently, to address different crowd scales and density distributions between domains, CODA [8] performed adversarial training with multi-scale image pyramids from the two domains and achieved results close to the state-of-the-art fully supervised model trained on the target domain. ASNet [10] adopted a dual-discriminator strategy to pull the source domain closer to the target domain in both the global and local feature spaces through adversarial learning. However, these methods over-optimise source-domain samples that are close to the target domain, so the model's performance improves on the target domain but degrades significantly on the source domain.…”
Section: Introduction
confidence: 99%
“…Zou et al. [3] proposed an adversarial scoring network (ASNet) that gradually closes the cross-domain gap from coarse to fine granularity. The coarse-grained phase designs a dual-discriminator strategy to adapt the source domain toward the target domain from the global and local feature-space perspectives.…”
Section: Methods Based On the Distribution Strategy
confidence: 99%
“…Cross-domain / Multi-domain Learning. Many researchers explore cross-domain problems [40,41,42,43,25] in crowd counting, including cross-scene [32], cross-view [44], and cross-modal [45] settings. The Adversarial Scoring Network [41] is applied to adapt to the target domain from coarse to fine granularity. Besides, cross-domain features can be extracted by message-passing mechanisms based on a graph neural network [46].…”
Section: Related Work
confidence: 99%