2023
DOI: 10.1007/s10489-023-04571-y

Full single-type deep learning models with multihead attention for speech enhancement

Cited by 5 publications (1 citation statement)
References 32 publications
“…The multi-head attention mechanism serves as a crucial component of the Transformer model, enabling the capture of dependencies across various positions within the input sequence [46]. As shown in Fig.…”
Section: Methods
Mentioning confidence: 99%
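As a rough illustration of the mechanism the citing work describes, the sketch below runs a single self-attention step with PyTorch's nn.MultiheadAttention. The embedding size, head count, batch size, and sequence length are illustrative assumptions, not values taken from the cited paper or its citing publication.

    # Minimal self-attention sketch (assumed sizes, not the paper's configuration)
    import torch
    import torch.nn as nn

    embed_dim, num_heads = 256, 8  # assumed dimensions for illustration
    attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    # x: a batch of feature sequences, e.g. encoded spectrogram frames
    # shaped (batch, time_steps, embed_dim)
    x = torch.randn(4, 100, embed_dim)

    # Self-attention: queries, keys, and values all come from x, so every
    # time step can attend to every other position in the sequence.
    out, weights = attn(x, x, x, need_weights=True)
    print(out.shape)      # torch.Size([4, 100, 256])
    print(weights.shape)  # torch.Size([4, 100, 100]), head-averaged weights

Because queries, keys, and values are the same sequence, each head weights every time frame against every other frame, which is the cross-position dependency capture the quoted statement attributes to multi-head attention.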