Findings of the Association for Computational Linguistics: ACL 2023
DOI: 10.18653/v1/2023.findings-acl.406
ML-LMCL: Mutual Learning and Large-Margin Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding

Abstract: Spoken language understanding (SLU) is a fundamental task in task-oriented dialogue systems. However, inevitable errors from automatic speech recognition (ASR) usually impair understanding performance and lead to error propagation. Although some attempts address this problem through contrastive learning, they (1) treat clean manual transcripts and ASR transcripts equally, without discrimination, during fine-tuning; (2) neglect the fact that semantically similar pairs are still pushed awa…

Cited by 11 publications (2 citation statements) | References 40 publications
“…) to achieve the synchronous optimization. Thirdly, inspired by Ji et al (2022b); Cheng et al (2023a), we utilize contrastive learning to minimize the InfoNCE loss to maximize a lower bound on CIG.…”
Section: Method Architecture Overview
confidence: 99%
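The cited statement invokes the standard result that minimizing the InfoNCE loss maximizes a lower bound on the mutual information between paired representations. A minimal NumPy sketch of that loss (the function name and temperature value are illustrative assumptions, not the cited papers' implementation):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE loss: for each anchor, the same-index row of `positives`
    is the positive; all other rows in the batch act as negatives."""
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature            # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # cross-entropy with the diagonal (matched pair) as the correct class
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

In an SLU setting, `anchors` and `positives` would be encodings of a manual transcript and its ASR counterpart; the loss is smallest when each transcript is most similar to its own paired version rather than to other utterances in the batch.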
“…End-to-end SLU Cascaded SLU methods work on ASR transcripts, for which error propagation is a major challenge (Chang and Chen, 2022; Cheng et al, 2023a). Hence, end-to-end methods have recently gained popularity (Serdyuk et al, 2018; Haghani et al, 2018), especially as the performance gap with cascaded systems has been mitigated in many cases thanks to the PTLM paradigm.…”
Section: Related Work
confidence: 99%