2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2018.8461587

Attention-Based Models for Text-Dependent Speaker Verification

Cited by 99 publications (67 citation statements). References 3 publications.
“…al need to ask teachers to wear the LENA system during the entire teaching process and use differences in volume and pitch in order to assess when teachers or students were speaking. Please note that CAD is different from classic speaker verification [14,15,16] and speaker diarization [17] in that (1) there is no enrollment-verification two-stage process in CAD tasks; and (2) not every speaker needs to be identified.…”
Section: Related Work (mentioning), confidence: 99%
“…The attention mechanism has been studied by several authors in ASV, e.g., [13,19,20]. However, most of the proposals used the attention mechanism to produce better frame pooling, while we use it to produce a better utterance alignment.…”
Section: Related Work (mentioning), confidence: 99%
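The statement above contrasts two uses of attention in speaker verification; the "frame pooling" use it refers to is attention-weighted averaging of frame-level features into a single utterance-level embedding. The following is a minimal NumPy sketch of that idea; the tanh scoring network, parameter shapes, and variable names are illustrative assumptions, not the cited papers' exact formulation.

```python
import numpy as np

def attention_pool(frames, W, b, v):
    """Pool frame-level features into one utterance-level embedding.

    frames: (T, D) frame-level hidden states from a speaker encoder
    W (D, H), b (H,), v (H,): parameters of a small scoring network
    (hypothetical shapes for illustration).
    """
    # Per-frame scalar scores: e_t = v . tanh(W h_t + b)
    scores = np.tanh(frames @ W + b) @ v          # (T,)
    # Softmax over time so the weights sum to 1
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Attention-weighted average of the frames
    return weights @ frames                       # (D,)

# Toy usage with random frames and parameters
T, D, H = 50, 64, 32
rng = np.random.default_rng(0)
frames = rng.standard_normal((T, D))
W, b, v = rng.standard_normal((D, H)), np.zeros(H), rng.standard_normal(H)
embedding = attention_pool(frames, W, b, v)
print(embedding.shape)  # (64,)
```

In practice the scoring parameters would be trained jointly with the speaker encoder, so frames that carry more speaker-discriminative information receive larger weights than a plain time average would give them.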
“…Finally, we built a phone-blind attention system where the attention weight is computed from the speaker feature itself, rather than phonetic features. This approach is similar to the work in [19,20], though the attention function is not trained. This system is denoted by Att-Spk.…”
Section: Settings (mentioning), confidence: 99%
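To make the "phone-blind, untrained" idea concrete, here is a parameter-free sketch in which the attention weights are derived from the speaker features themselves with no learned attention function. The cosine-to-mean scoring rule is only one plausible parameter-free choice assumed for illustration; the cited system's exact rule may differ.

```python
import numpy as np

def phone_blind_attention_pool(frames):
    """Untrained, feature-derived attention pooling.

    frames: (T, D) frame-level speaker features.
    Weights come from the features themselves (cosine similarity of each
    frame to the mean frame), with no trained attention parameters.
    """
    mean = frames.mean(axis=0)
    scores = frames @ mean / (
        np.linalg.norm(frames, axis=1) * np.linalg.norm(mean) + 1e-8
    )
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ frames
```

Because nothing here is trained, the pooling cannot exploit phonetic content; it simply emphasizes frames that agree with the utterance-level average, which is what distinguishes it from the learned attention variants discussed above.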
“…In [10], a multi-head self-attention mechanism was applied. In [28], the authors explored different topologies and variants of the attention layer, and compared different pooling methods on the attention weights.…”
Section: Introduction (mentioning), confidence: 99%
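The multi-head variant mentioned above extends single-head attentive pooling by letting each head compute its own softmax weights over time and its own weighted average, with the head outputs concatenated. The sketch below assumes a simple tanh scorer per head and illustrative shapes; it is not the exact architecture of the cited works.

```python
import numpy as np

def multi_head_attention_pool(frames, W_list, v_list):
    """Multi-head attentive pooling (illustrative sketch).

    frames: (T, D) frame-level features
    W_list[h]: (D, H) and v_list[h]: (H,) are per-head scoring parameters.
    Returns a (num_heads * D,) concatenation of per-head pooled vectors.
    """
    pooled = []
    for W, v in zip(W_list, v_list):
        scores = np.tanh(frames @ W) @ v          # (T,) per-head scores
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                  # softmax over time
        pooled.append(weights @ frames)           # (D,) per-head embedding
    return np.concatenate(pooled)

# Toy usage with two heads
T, D, H = 50, 64, 32
rng = np.random.default_rng(1)
frames = rng.standard_normal((T, D))
W_list = [rng.standard_normal((D, H)) for _ in range(2)]
v_list = [rng.standard_normal(H) for _ in range(2)]
print(multi_head_attention_pool(frames, W_list, v_list).shape)  # (128,)
```

Each head can specialize in different regions of the utterance, which is the usual motivation for comparing multi-head pooling against single-head and average pooling.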