Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
DOI: 10.18653/v1/2021.findings-acl.209

WIND: Weighting Instances Differentially for Model-Agnostic Domain Adaptation

Abstract: Domain adaptation is a fundamental problem in machine learning and natural language processing. In this paper, we study the domain adaptation problem from the perspective of instance weighting. Conventional instance weighting approaches cannot learn weights that make the model generalize well in the target domain. To tackle this problem, inspired by meta-learning, we formulate domain adaptation as a bi-level optimization problem and propose a novel differentiable model-agnostic instance weighting …
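The abstract is truncated, so WIND's exact formulation is not shown above. For orientation, the bi-level recipe it alludes to, learning per-instance weights on source data such that a model trained under those weights performs well on target data, can be sketched as follows. This is a minimal one-inner-step variant in the spirit of learning-to-reweight; the toy linear model, data, and hyperparameters are all illustrative assumptions, not WIND's actual method.

```python
import torch

# Toy setup: a linear regressor adapted from a source batch to a small
# target (meta) batch. All shapes and data here are illustrative.
torch.manual_seed(0)
d = 5
X_src, y_src = torch.randn(64, d), torch.randn(64, 1)   # source-domain batch
X_tgt, y_tgt = torch.randn(8, d), torch.randn(8, 1)     # target "meta" batch

theta = torch.randn(d, 1, requires_grad=True)           # model parameters
lr_inner, lr_outer = 0.1, 0.05

for step in range(200):
    # Inner problem: weighted source loss. eps starts at zero so that its
    # gradient measures how each instance would affect the target loss.
    eps = torch.zeros(X_src.size(0), requires_grad=True)
    per_ex = ((X_src @ theta - y_src) ** 2).squeeze(1)  # per-instance loss
    g = torch.autograd.grad((eps * per_ex).sum(), theta, create_graph=True)[0]
    theta_hat = theta - lr_inner * g                    # one-step adaptation

    # Outer problem: target loss under the adapted parameters, differentiated
    # through the inner step with respect to the instance weights.
    meta_loss = ((X_tgt @ theta_hat - y_tgt) ** 2).mean()
    grad_eps = torch.autograd.grad(meta_loss, eps)[0]

    # Instances whose gradients align with the target get positive weight.
    weights = torch.clamp(-grad_eps, min=0)
    weights = weights / (weights.sum() + 1e-8)

    # Final model update: ordinary weighted training step on source data.
    per_ex = ((X_src @ theta - y_src) ** 2).squeeze(1)  # fresh graph
    grad_theta = torch.autograd.grad((weights * per_ex).sum(), theta)[0]
    with torch.no_grad():
        theta -= lr_outer * grad_theta
```

The key trick is initializing `eps` at zero: the gradient of the meta loss with respect to `eps` then measures how much up-weighting each source instance would reduce the target loss, which is exactly the signal a fixed heuristic weighting cannot provide.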

Cited by 4 publications (8 citation statements) · References 50 publications
“…To reduce the computation cost, we use the approximation technique in (Chen et al., 2021) to compute the training guidance (i.e., $\partial \mathcal{L}_M(\hat{\theta}(w))$ …”
Section: Meta-learning Module
confidence: 99%
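The truncated expression above is the hypergradient of the meta loss $\mathcal{L}_M$ with respect to the instance weights $w$, differentiated through the adapted parameters $\hat{\theta}(w)$. The exact approximation used is not visible in this snippet; a common low-cost form for such one-step bi-level problems (assuming an inner SGD step with rate $\eta$, as in DARTS-style methods) is:

$$
\hat{\theta}(w) = \theta - \eta\,\nabla_{\theta}\mathcal{L}_{\text{train}}(\theta, w),
\qquad
\frac{\partial \mathcal{L}_M(\hat{\theta}(w))}{\partial w}
= -\eta\,\nabla^{2}_{w,\theta}\mathcal{L}_{\text{train}}(\theta, w)\,
  \nabla_{\hat{\theta}}\mathcal{L}_M(\hat{\theta})
\approx -\eta\,\frac{\nabla_{w}\mathcal{L}_{\text{train}}(\theta^{+}, w)
  - \nabla_{w}\mathcal{L}_{\text{train}}(\theta^{-}, w)}{2\epsilon},
$$

where the second-order term is estimated by a finite difference around $\theta^{\pm} = \theta \pm \epsilon\, v$ with $v = \nabla_{\hat{\theta}}\mathcal{L}_M(\hat{\theta})$, avoiding any explicit Hessian computation.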
“…Comparing Methods: Since DaMSTF can be customized to both semi-supervised and unsupervised domain adaptation scenarios, the baselines contain both unsupervised and semi-supervised domain adaptation approaches. For unsupervised domain adaptation, Out (Chen et al., 2021), DANN (Ganin et al., 2016), and CRST (Zou et al., 2019) are selected as the baselines, while In+Out (Chen et al., 2021), MME (Saito et al., 2019), BiAT (Jiang et al., 2020), and Wind (Chen et al., 2021) are selected as the baselines for semi-supervised domain adaptation. Out and In+Out are two straightforward ways of realizing unsupervised and semi-supervised domain adaptation, where Out means the base model is trained on the out-of-domain data (i.e., labeled source-domain data) and In+Out means the base model is trained on both the in-domain and the out-of-domain data.…”
Section: Experiment Settings
confidence: 99%
“…Methods for cross-domain text classification can be roughly categorized into two classes: task-agnostic methods and pivot-based methods. The former includes divergence minimization [16,18,10], stacked denoising auto-encoders [9], discriminative adversarial training [7,20], instance reweighting [3], and so forth. There are also some works that combine task-agnostic methods with NLP-specific approaches or models [8,6].…”
Section: Cross-domain Text Classification
confidence: 99%
“…The most prominent pivot-based methods are Structural Correspondence Learning (SCL) and its variants [2,25,27]. In SCL, the pivots are defined as the words that occur frequently in both the source and target domains and behave in similar ways that are discriminative for the classification task. The model can effectively learn domain-invariant features for pivots, but it is more challenging for the non-pivots, as they have domain-specific meanings.…”
Section: Introduction
confidence: 99%
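To make the pivot definition above concrete: a pivot must (a) occur frequently in both domains and (b) be discriminative for the task. Below is a minimal selection sketch under those two criteria; the frequency threshold, the class-ratio score, and all names are illustrative simplifications rather than SCL's exact procedure (the SCL-MI variant, for instance, scores candidates by mutual information with the label):

```python
from collections import Counter

def select_pivots(src_docs, src_labels, tgt_docs, min_count=10, k=100):
    """Pick pivot words: frequent in BOTH domains (criterion a) and
    predictive of the source labels (criterion b, scored here by how
    far a word's class distribution is from uniform)."""
    src_cnt = Counter(w for doc in src_docs for w in doc)
    tgt_cnt = Counter(w for doc in tgt_docs for w in doc)
    common = [w for w in src_cnt
              if src_cnt[w] >= min_count and tgt_cnt[w] >= min_count]

    # Document frequency of each word within each source class.
    pos = Counter(w for doc, y in zip(src_docs, src_labels) if y == 1
                  for w in set(doc))
    neg = Counter(w for doc, y in zip(src_docs, src_labels) if y == 0
                  for w in set(doc))

    def score(w):
        p, n = pos[w] + 1, neg[w] + 1        # add-one smoothing
        return abs(p / (p + n) - 0.5)        # distance from a uniform split

    return sorted(common, key=score, reverse=True)[:k]

# Example: "great" appears in both domains and skews toward one class.
src = [["great", "plot"], ["boring", "plot"], ["great", "acting"]]
tgt = [["great", "battery"], ["boring", "manual"], ["great", "screen"]]
print(select_pivots(src, [1, 0, 1], tgt, min_count=1, k=2))
```

Words like "great" or "boring" satisfy both criteria in sentiment transfer (e.g., movie reviews to product reviews), which is why SCL can align their representations across domains, while domain-specific non-pivots remain the hard case the quote points out.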