2021
DOI: 10.48550/arxiv.2110.00165
Preprint

Large-scale ASR Domain Adaptation using Self- and Semi-supervised Learning

Cited by 6 publications (7 citation statements). References 0 publications.
“…[24] also designs a similar hybrid multitask learning approach to train acoustic models under low-resource settings, comprising supervised CTC, attention and self-supervised reconstruction losses. Similarly, [25] combines…”
Section: Related Work (mentioning)
confidence: 99%
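The statement above describes a hybrid multitask objective that weights a supervised CTC term, an attention-decoder cross-entropy term, and a self-supervised reconstruction term. The following is a minimal sketch of such a combined loss, not the cited papers' code; all tensor shapes, loss weights, and names (`hybrid_loss`, `w_ctc`, etc.) are hypothetical.

```python
# Sketch of a hybrid multitask loss: CTC + attention CE + feature reconstruction.
# Shapes and weights below are illustrative assumptions, not the papers' values.
import torch
import torch.nn.functional as F

ctc_loss_fn = torch.nn.CTCLoss(blank=0, zero_infinity=True)

def hybrid_loss(log_probs, dec_logits, dec_targets,
                recon, feats, targets, input_lens, target_lens,
                w_ctc=1.0, w_att=1.0, w_rec=0.1):
    """Weighted sum of CTC, attention cross-entropy, and reconstruction losses."""
    # Supervised CTC over encoder outputs: log_probs is (time, batch, vocab).
    l_ctc = ctc_loss_fn(log_probs, targets, input_lens, target_lens)
    # Cross-entropy over the attention decoder's per-step logits (batch, steps, vocab).
    l_att = F.cross_entropy(dec_logits.transpose(1, 2), dec_targets)
    # Self-supervised term: reconstruct the input features from the encoder.
    l_rec = F.mse_loss(recon, feats)
    return w_ctc * l_ctc + w_att * l_att + w_rec * l_rec

# Tiny smoke test with random tensors (hypothetical shapes).
T, B, V, U, D = 50, 2, 30, 10, 80
log_probs = torch.randn(T, B, V).log_softmax(-1)
targets = torch.randint(1, V, (B, U))
dec_logits = torch.randn(B, U, V)
dec_targets = torch.randint(0, V, (B, U))
recon, feats = torch.randn(B, T, D), torch.randn(B, T, D)
loss = hybrid_loss(log_probs, dec_logits, dec_targets, recon, feats, targets,
                   torch.full((B,), T, dtype=torch.long),
                   torch.full((B,), U, dtype=torch.long))
print(loss.item())
```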
“…Domain Adaptation (DA) for building efficient ASR systems has been a well-studied topic in the literature, with early work focusing on regularization [24,25], teacher-student learning [26,27] or adversarial learning [28,29]. Lately, unsupervised domain adaptation of ASR models has been gaining traction, and researchers have been trying to find ways to use huge amounts of unlabeled data from the target domain for DA [27,30,31]. Continued pre-training is another commonly used approach [32].…”
Section: Domain Adaptation (mentioning)
confidence: 99%
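One of the approaches the statement above lists, teacher-student learning on unlabeled target-domain data, can be illustrated with a short pseudo-labeling loop. This is a toy sketch under stated assumptions (linear stand-in models, random features), not the cited papers' method.

```python
# Sketch of teacher-student pseudo-labeling for unsupervised domain adaptation.
# A frozen "teacher" labels unlabeled target-domain features; the "student" is
# trained on those pseudo-labels. Models and data are hypothetical stand-ins.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D, V = 80, 30                      # feature dim and vocab size (assumed)
teacher = torch.nn.Linear(D, V)    # stands in for a source-domain ASR model
student = torch.nn.Linear(D, V)    # model being adapted to the target domain
opt = torch.optim.SGD(student.parameters(), lr=0.1)

unlabeled_target_feats = torch.randn(64, D)   # unlabeled target-domain data

for step in range(10):
    with torch.no_grad():                     # teacher stays frozen
        pseudo_labels = teacher(unlabeled_target_feats).argmax(dim=-1)
    loss = F.cross_entropy(student(unlabeled_target_feats), pseudo_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```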
“…Filtering noisy labels has a long history and can be seen as a form of curriculum learning [23]. More recently it has been explored in semi-supervised settings in ASR [24]. See [25,26] for more detailed surveys.…”
Section: Introduction (mentioning)
confidence: 99%
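The noisy-label filtering mentioned in the last statement is often implemented as confidence-based selection of pseudo-labeled utterances, with the threshold relaxed over time to give a curriculum-like schedule. The sketch below shows one simple, assumed form of such a filter; the function name and threshold are illustrative, not from the cited works.

```python
# Sketch of confidence-based filtering of pseudo-labeled utterances.
import torch

def filter_pseudo_labels(log_probs, threshold=0.9):
    """Keep utterances whose mean frame-level max posterior >= threshold.

    log_probs: (batch, time, vocab) log-posteriors from a teacher model.
    Returns a boolean mask over the batch dimension.
    """
    frame_conf = log_probs.exp().max(dim=-1).values   # (batch, time)
    utt_conf = frame_conf.mean(dim=-1)                # (batch,)
    return utt_conf >= threshold

# Toy usage with random posteriors (hypothetical shapes and threshold).
log_probs = torch.randn(8, 100, 30).log_softmax(dim=-1)
mask = filter_pseudo_labels(log_probs, threshold=0.2)
print(f"kept {int(mask.sum())} of {mask.numel()} utterances")
```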