2022 IEEE Spoken Language Technology Workshop (SLT), 2023
DOI: 10.1109/slt54892.2023.10022474
Improving Generalizability of Distilled Self-Supervised Speech Processing Models Under Distorted Settings

Cited by 6 publications (2 citation statements)
References 23 publications
“…Recent work has shown that data augmentation during the knowledge distillation process can improve the performance of compressed models in mismatched domain scenarios without compromising the model's size [18]. However, there are still limitations on how much robustness a model can acquire from the distillation process.…”
Section: Innovation #3: Environment Awareness
Confidence: 99%
“…While domain adaptation techniques, such as those proposed in "Robust HuBERT" [15] and "deHuBERT" [16], can alleviate this problem, the methods are not directly applicable for compression. Recent works have started to propose solutions that tackle compression and environmental robustness jointly (e.g., [17,18]). These solutions, however, have yet to be explored for ASR and have shown some sensitivity to varying environmental conditions.…”
Section: Introduction
Confidence: 99%