Findings of the Association for Computational Linguistics: ACL 2023
DOI: 10.18653/v1/2023.findings-acl.514
Adversarial Training for Low-Resource Disfluency Correction

Cited by 1 publication (1 citation statement). References 0 publications.
“…In low-resource settings, adversarial training helps transformers improve the representations they learn for downstream tasks. We use the Seq-GAN-BERT model (Bhat et al., 2023), which supports adversarial training for transformers, utilizing labeled and unlabeled data for token classification-based DC. Unlabeled data is used from the helper datasets specified in Section 3.5.…”
Section: Transformer With Adversarial Training (Seq-GAN-BERT)
Citation type: mentioning (confidence: 99%)
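The cited passage describes Seq-GAN-BERT as adversarial training for a transformer over both labeled and unlabeled data, framing disfluency correction (DC) as token classification. As a rough illustration of that general idea only, below is a minimal GAN-BERT-style sketch in PyTorch; it is not the authors' released Seq-GAN-BERT code, and the encoder checkpoint, the two-label tag set (O / DISFLUENT), the losses, and all hyperparameters are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical GAN-BERT-style sketch for token-classification disfluency correction.
# Assumptions: Hugging Face `transformers`, PyTorch, a bert-base encoder, and a
# two-label tag set (0 = fluent, 1 = disfluent). Unlabeled helper-dataset tokens
# carry label -100 and contribute only to the real/fake (unsupervised) losses.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

NUM_LABELS = 2      # assumed tag set: 0 = fluent (O), 1 = disfluent
HIDDEN = 768        # hidden size of bert-base
NOISE_DIM = 100
FAKE = NUM_LABELS   # index of the extra "fake" class in the discriminator


class Generator(nn.Module):
    """Maps random noise vectors to fake token representations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, HIDDEN), nn.LeakyReLU(0.2),
            nn.Linear(HIDDEN, HIDDEN),
        )

    def forward(self, z):
        return self.net(z)


class Discriminator(nn.Module):
    """Per-token classifier over the real labels plus one 'fake' class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(HIDDEN, HIDDEN), nn.LeakyReLU(0.2),
            nn.Linear(HIDDEN, NUM_LABELS + 1),
        )

    def forward(self, h):
        return self.net(h)


encoder = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
G, D = Generator(), Discriminator()
opt_d = torch.optim.AdamW(list(encoder.parameters()) + list(D.parameters()), lr=2e-5)
opt_g = torch.optim.AdamW(G.parameters(), lr=2e-5)


def train_step(texts, token_labels):
    """One adversarial step. `token_labels` is aligned to sub-tokens; tokens
    from unlabeled (helper-dataset) sentences carry the ignore label -100."""
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    real_h = encoder(**batch).last_hidden_state                       # (B, T, HIDDEN)
    z = torch.randn(real_h.size(0), real_h.size(1), NOISE_DIM)
    fake_h = G(z)

    # --- Discriminator: supervised token loss + real/fake losses ---
    logits_real = D(real_h)
    flat_logits = logits_real.view(-1, NUM_LABELS + 1)
    flat_labels = token_labels.view(-1)
    labeled = flat_labels != -100
    loss_sup = (nn.functional.cross_entropy(flat_logits[labeled], flat_labels[labeled])
                if labeled.any() else torch.tensor(0.0))

    p_real = logits_real.softmax(-1)
    p_fake = D(fake_h.detach()).softmax(-1)
    loss_unsup = (-torch.log(1 - p_real[..., FAKE] + 1e-8).mean()     # real tokens: not fake
                  - torch.log(p_fake[..., FAKE] + 1e-8).mean())       # generated tokens: fake
    loss_d = loss_sup + loss_unsup
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- Generator: push fake representations toward the 'real' classes ---
    p_g = D(G(torch.randn_like(z))).softmax(-1)
    loss_g = -torch.log(1 - p_g[..., FAKE] + 1e-8).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

The design point this sketch tries to capture is that the extra (k+1)-th "fake" class lets unlabeled real sentences still provide a training signal through the real/fake losses, which is why this style of adversarial training is attractive when labeled disfluency data is scarce.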