ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp43922.2022.9746529
Self-Knowledge Distillation via Feature Enhancement for Speaker Verification

Cited by 19 publications (1 citation statement)
References 54 publications
“…Additionally, some researchers applied Knowledge Distillation (KD) [46], [47] and Neural Architecture Search (NAS) [48] techniques to implement lightweight SV [49]-[52]. In [49], a teacher-student training strategy was proposed for text-independent SV, achieving competitive error rates with models 88-93% smaller.…”
Section: Related Work
confidence: 99%
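The teacher-student strategy mentioned in the citation statement is typically realized by matching the student's softened output distribution to the teacher's. As a minimal illustrative sketch (not the cited paper's exact method; function names and the temperature value are hypothetical), the standard temperature-scaled KL-divergence distillation loss can be written as:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over a vector of logits.
    e = np.exp((z - np.max(z)) / T)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the teacher's and student's softened
    # distributions, scaled by T^2 (Hinton-style distillation).
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student's softened prediction
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

# When the student reproduces the teacher's logits exactly,
# the distillation loss vanishes.
logits = np.array([2.0, 0.5, -1.0])
print(distillation_loss(logits, logits))  # → 0.0
```

In practice this term is combined with the usual classification loss on hard labels; the smaller student then retains much of the teacher's accuracy, which is how the 88-93% model-size reductions cited above remain competitive.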