2022
DOI: 10.1109/taslp.2022.3182856
EfficientTDNN: Efficient Architecture Search for Speaker Recognition

Abstract: Convolutional neural networks (CNNs), such as the time-delay neural network (TDNN), have shown remarkable capability in learning speaker embeddings. However, they come at a substantial computational cost in storage size, processing, and memory. Discovering a specialized CNN that meets a specific constraint requires substantial effort from human experts. Compared with hand-designed approaches, neural architecture search (NAS) appears as a practical technique for automating manual architecture design …

Cited by 13 publications (3 citation statements)
References 44 publications (49 reference statements)
“…The ECAPA-TDNNLite based method [50] is a lightweight version of the ECAPA-TDNN based method, in which a large model, ECAPA-TDNN, is utilized for enrollment and a small model, ECAPA-TDNNLite, is used for verification. The EfficientTDNN based method [51] uses the NAS technique to design an efficient model to implement lightweight SV. The KD-based method [52] needs to train a large teacher model first, and then the KD technique is adopted to obtain a small student model based on the teacher model for realizing lightweight SV.…”
Section: Comparison of Different Methods
confidence: 99%
“…In this section, we compare the proposed lightweight method to six state-of-the-art methods for lightweight SV, including the ECAPA-TDNNLite [50], EfficientTDNN [51], KD-based [52], Thin-ResNet34 [64], Fast-ResNet34 [65], and CSTCTS1dConv (Channel Split Time-Channel-Time Separable 1-dimensional Convolution) [66]. The ECAPA-TDNNLite based method [50] is a lightweight version of the ECAPA-TDNN based method, in which a large model, ECAPA-TDNN, is utilized for enrollment and a small model, ECAPA-TDNNLite, is used for verification.…”
Section: Comparison of Different Methods
confidence: 99%