Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval 2020
DOI: 10.1145/3397271.3401242

Domain Adaptation with Reconstruction for Disaster Tweet Classification

Cited by 14 publications (22 citation statements). References 10 publications.
“…The main empirical focus of our work is to measure the effectiveness of our approach when no labeled data is available from the target domain. To this end, considering a UDA setting, we pair up public datasets from disaster management as source (S) and target (T) for our experiments, supervise our model (refer to Section III-C) on only labeled instances from S, and test on T. While previous works are restricted to datasets involving only binary classification [5], [6], we also report results on public datasets involving a more realistic and useful multi-label, multi-class classification setting. Additionally, to verify the effectiveness of our approach in limited labeled data scenarios, we reduce the source domain training data.…”
Section: Methods
Citation type: mentioning (confidence: 99%)
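The excerpt above describes the standard unsupervised domain adaptation (UDA) evaluation protocol: train only on labeled source-domain tweets and evaluate on an unseen target domain. The sketch below is a minimal illustration of that protocol only; the file names are hypothetical, and the TF-IDF plus logistic-regression classifier is a simple single-label stand-in, not the model used in the cited work.

```python
# Minimal sketch of the UDA evaluation protocol: supervise on source labels only,
# evaluate on the target domain. File names and the classifier are illustrative.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Hypothetical CSVs with columns "text" and "label".
source = pd.read_csv("source_disaster_tweets.csv")   # labeled source domain S
target = pd.read_csv("target_disaster_tweets.csv")   # target domain T; labels used only for scoring

vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
X_src = vectorizer.fit_transform(source["text"])      # features fit on source only
X_tgt = vectorizer.transform(target["text"])          # no target labels touch training

clf = LogisticRegression(max_iter=1000)
clf.fit(X_src, source["label"])                       # supervision comes from S alone

pred = clf.predict(X_tgt)
print("Macro-F1 on target domain:", f1_score(target["label"], pred, average="macro"))
```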
“…
• Feature projection [5], [8], [9]
• Instance re-weighting [10]-[12]
• Pivot feature centric [13], [14]
• Domain Adversarial / Gradient Reversal based [6], [15]-[19]
Feature projection signifies bringing the features of the source and target domains into a joint latent space. [8] used stacked autoencoders to learn domain-adaptive feature representations for sentiment analysis.…”
Section: Related Work, A. Unsupervised Domain Adaptation (UDA)
Citation type: mentioning (confidence: 99%)
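The excerpt above notes that feature-projection methods, such as the stacked autoencoders it attributes to [8], map both domains into a joint latent space. The PyTorch sketch below illustrates the general idea of reconstruction-based feature projection; the architecture sizes, loss, and training loop are assumptions for illustration, not the configuration of [8] or of the indexed paper.

```python
# Illustrative sketch of feature projection via reconstruction: an autoencoder is
# trained on unlabeled data from both domains so that source and target tweets
# share one latent space. Sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class ReconstructionProjector(nn.Module):
    def __init__(self, input_dim=5000, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)            # shared latent representation
        return self.decoder(z), z

# Unlabeled bag-of-words features from both domains (random stand-ins here).
x_source = torch.rand(256, 5000)
x_target = torch.rand(256, 5000)
x_all = torch.cat([x_source, x_target])   # reconstruction uses no labels

model = ReconstructionProjector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimizer.zero_grad()
    recon, _ = model(x_all)
    loss = loss_fn(recon, x_all)       # reconstruction loss ties the two domains
    loss.backward()
    optimizer.step()

# A downstream classifier would be trained on model.encoder(x_source) with source
# labels only, then applied to model.encoder(x_target).
```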