Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1478
Adaptive Ensembling: Unsupervised Domain Adaptation for Political Document Analysis

Abstract: Insightful findings in political science often require researchers to analyze documents of a certain subject or type, yet these documents are usually contained in large corpora that do not distinguish between pertinent and nonpertinent documents. In contrast, we can find corpora that label relevant documents but have limitations (e.g., from a single source or era), preventing their use for political science research. To bridge this gap, we present adaptive ensembling, an unsupervised domain adaptation framework…

Cited by 7 publications (9 citation statements). References 24 publications (29 reference statements).
“…For example, classic methods such as tri-training constitute a strong baseline for domain shift in neural times (Ruder and Plank, 2018). Pseudo-labeling has recently been studied for parsing with contextualized word representations (Rotman and Reichart, 2019; Lim et al., 2020), and a recent work proposes adaptive ensembling (Desai et al., 2019) as an extension of temporal ensembling (see hybrid methods in Section 6).…”
Section: Pseudo-labeling
Mentioning confidence: 99%
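Pseudo-labeling, as described in this citation statement, trains a model on labeled source data, labels the unlabeled target data with it, and keeps only confident predictions for retraining. A minimal sketch of one such round, assuming a scikit-learn-style classifier with `fit`/`predict_proba` (the threshold value and function name are illustrative, not from the cited works):

```python
import numpy as np

def pseudo_label_round(model, labeled_X, labeled_y, unlabeled_X, threshold=0.9):
    """One round of pseudo-labeling: fit on the labeled set, then add
    confidently predicted unlabeled examples to the training pool."""
    model.fit(labeled_X, labeled_y)
    probs = model.predict_proba(unlabeled_X)
    conf = probs.max(axis=1)
    keep = conf >= threshold  # keep only high-confidence predictions
    new_X = np.vstack([labeled_X, unlabeled_X[keep]])
    new_y = np.concatenate([labeled_y, probs[keep].argmax(axis=1)])
    return new_X, new_y
```

In practice the round is repeated, optionally raising the threshold or re-weighting pseudo-labeled examples, which is where variants such as tri-training differ.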
“…Work on the intersection of data-centric and model-centric methods is plentiful. It currently includes combining semi-supervised objectives with an adversarial loss (Lim et al., 2020; Alam et al., 2018b), combining pivot-based approaches with pseudo-labeling (Cui and Bollegala, 2019) and very recently with contextualized word embeddings (Ben-David et al., 2020), and combining multi-task approaches with domain shift (Jia et al., 2019), multi-task learning with pseudo-labeling (multi-task tri-training) (Ruder and Plank, 2018), and adaptive ensembling (Desai et al., 2019), which uses a student-teacher network with a consistency-based self-ensembling loss and a temporal curriculum. They apply adaptive ensembling to study temporal and topic drift in political data classification (Desai et al., 2019).…”
Section: Hybrid Approaches
Mentioning confidence: 99%
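The student-teacher self-ensembling idea mentioned above can be sketched in a few lines: the teacher's weights are an exponential moving average (EMA) of the student's, and unlabeled target-domain examples contribute a consistency loss between the two predictions. This is a minimal mean-teacher-style illustration with a linear classifier standing in for the real networks; it is not the authors' implementation, and the class and decay value are assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class StudentTeacher:
    """Sketch of a self-ensembling student-teacher pair: the teacher
    trails the student via EMA, and a consistency loss penalizes
    disagreement on (unlabeled) target-domain inputs."""

    def __init__(self, dim, n_classes, ema_decay=0.99):
        self.W_student = np.zeros((dim, n_classes))
        self.W_teacher = np.zeros((dim, n_classes))
        self.ema_decay = ema_decay

    def consistency_loss(self, X):
        # mean squared difference between student and teacher predictions
        p_s = softmax(X @ self.W_student)
        p_t = softmax(X @ self.W_teacher)
        return float(((p_s - p_t) ** 2).mean())

    def update_teacher(self):
        # teacher weights follow the student as an exponential moving
        # average -- the "temporal" part of temporal/adaptive ensembling
        d = self.ema_decay
        self.W_teacher = d * self.W_teacher + (1 - d) * self.W_student
```

The adaptive-ensembling variant cited here additionally schedules when the consistency term kicks in (the temporal curriculum); that scheduling is omitted from this sketch.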
“…Raw data: Since polarization is a process that needs to be analyzed over time (DiMaggio et al., 1996), our annotated articles are sampled from a diachronic corpus of 1,749 news articles spanning nearly three decades (1947 to 1974). Articles in this corpus are drawn from the political news articles of Desai et al. (2019), extracted from the Corpus of Historical American English (COHA; Davies, 2012), which covers the years 1922-1986. These 1,749 articles are extracted such that: (1) they cover broad and politically relevant topics (ranging from education and health to the economy) but still share discussions related to the federal budget, to make our annotations tractable 4 ; (2) they are balanced in the number of articles across 5 news outlets with center-left, central, and center-right ideology (cf.
Section: Data Collection and Annotation
Mentioning confidence: 99%
“…• Feature projection [5], [8], [9]
• Instance re-weighting [10]-[12]
• Pivot-feature centric [13], [14]
• Domain-adversarial / gradient-reversal based [6], [15]-[19]
Feature projection means mapping the features of the source and target domains into a joint latent space. [8] used stacked autoencoders to learn domain-adaptive feature representations for sentiment analysis.…”
Section: Related Work, A. Unsupervised Domain Adaptation (UDA)
Mentioning confidence: 99%
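The joint-latent-space idea in the last citation statement can be illustrated compactly. As a hedged sketch, PCA on the pooled source and target features stands in for the stacked autoencoders of [8] (a linear autoencoder with tied weights recovers the same subspace); the function name and dimensionality are illustrative:

```python
import numpy as np

def joint_projection(X_source, X_target, k):
    """Project source- and target-domain features into a shared
    k-dimensional latent space learned from the pooled data."""
    X = np.vstack([X_source, X_target])
    mu = X.mean(axis=0)
    # top-k right singular vectors of the centered pooled matrix
    # span the joint latent space
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:k].T  # (dim, k) projection matrix shared by both domains
    return (X_source - mu) @ P, (X_target - mu) @ P
```

A downstream classifier trained on the projected source features is then applied to the projected target features, the common pattern in feature-projection UDA.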