2023
DOI: 10.54097/ajst.v5i2.5931
A Survey of Crowd Counting Algorithm Based on Domain Adaptation

Abstract: Crowd counting, the task of estimating the number of individuals in a crowded scene, has gained increasing attention in computer vision research. However, crowd counting remains a challenging problem due to the complex and diverse nature of crowd scenes. In recent years, domain adaptation has emerged as a promising approach to improve crowd counting performance by adapting a pre-trained model to a target domain with different characteristics. This paper provides a survey of domain adaptation-based crowd counti…

Cited by 1 publication (1 citation statement).
References 10 publications.
“…[175] A model based on generative adversarial networks for semi-supervised learning; can improve performance by learning from both labeled and unlabeled data.
[176] A new method for learning invariant features for domain adaptation; can improve performance by learning representations that are invariant to domain shift.
[177] A new domain adaptation method combining invariant representations with self-supervised learning; can improve performance by learning domain-invariant representations and using self-supervised learning to learn features that transfer to new domains.
[178] A model based on meta-learning for transferable features; can improve performance by learning features that transfer to new domains.
[179] A model based on multi-task learning and attention; can improve performance by using both to learn representations that are invariant to domain shift.
[180] A model based on multi-task learning for few-shot image classification; can improve performance by learning multiple tasks from few examples.
[181] A model based on patch-level self-supervised learning; can improve performance by learning patch-level features that are invariant to domain shift.
[182] A model based on self-supervised contrastive learning; can improve performance by learning representations that are invariant to domain shift.
[183] A model based on self-supervised contrastive learning; can improve performance by learning representations that are invariant to domain shift.
[184] A model based on self-supervised learning and synthetic data; can improve performance by learning domain-invariant features and by generating synthetic data that resemble the target domain.
[185] A model based on synthetic data and domain-invariant feature aggregation…”
Section: Paper Contribution Advantages (mentioning)
Confidence: 99%
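Several of the cited works ([182], [183], and related entries) rely on self-supervised contrastive learning to obtain representations that are invariant to domain shift. As a rough illustration of that idea, the sketch below implements the standard InfoNCE contrastive objective in NumPy: each feature vector is pulled toward its positive pair and pushed away from all other samples in the batch. The function name, shapes, and temperature value are illustrative assumptions, not taken from any of the surveyed papers.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss: row i of `positives` is the positive
    pair for row i of `anchors`; all other rows serve as negatives."""
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature              # (N, N) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))
    # Cross-entropy with the diagonal (true pair) as the target class
    return float(-log_probs[idx, idx].mean())

rng = np.random.default_rng(0)
feats = rng.normal(size=(64, 32))
matched = info_nce_loss(feats, feats)                 # positives align perfectly
shuffled = info_nce_loss(feats, rng.permutation(feats))  # positives misaligned
```

When the positive pairs are correctly aligned the loss is much lower than when they are shuffled, which is the signal these methods exploit: augmented views of the same crowd patch should map to nearby features regardless of which domain the image came from.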