2022
DOI: 10.48550/arxiv.2204.06439
Preprint
Receptive Field Analysis of Temporal Convolutional Networks for Monaural Speech Dereverberation

Cited by 1 publication (1 citation statement)
References: 0 publications
“…As the long short-term memory (LSTM) network has excellent sequence-modeling ability and automatically detects word-level features, it has been widely used for named entity recognition (NER), and most new NER methods are based on LSTM models (Li et al., 2022; Chen et al., 2022; Rajput et al., 2022; Karim et al., 2022). However, because of the inherent architecture of recurrent networks, LSTM and other recurrent neural network (RNN) variants ignore the text structure and the semantic connections of the context: the word vectors produced by the ordinary approach are only static, word-level representations and do not carry the contextual semantic information (Ravenscroft, Goetze & Hain, 2022; Cui, Li & Xu, 2022; Li et al., 2021; Lang et al., 2021).…”
Section: Introduction (mentioning)
confidence: 99%
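To make the quoted passage concrete, here is a minimal sketch (an illustrative assumption, not code from the cited works) of the kind of LSTM-based NER tagger it describes: a static embedding lookup provides the word-level vectors the passage criticizes, and a bidirectional LSTM adds left/right context before per-token tag classification. The model sizes, tag set, and class name `BiLSTMTagger` are hypothetical.

```python
# Hypothetical BiLSTM NER tagger (sketch only, PyTorch).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)        # static word vectors
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)  # contextual encoding
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)   # per-token tag scores

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        contextual, _ = self.lstm(self.embed(token_ids))
        return self.classifier(contextual)                      # (batch, seq_len, num_tags)

# Toy usage: one 5-token sentence, 4 entity tags (e.g. O, PER, LOC, ORG).
model = BiLSTMTagger(vocab_size=1000, embed_dim=64, hidden_dim=128, num_tags=4)
logits = model(torch.randint(0, 1000, (1, 5)))
print(logits.shape)  # torch.Size([1, 5, 4])
```

The embedding table here assigns each word one fixed vector regardless of its sentence, which is the "static word vector" limitation the quoted introduction contrasts with context-aware representations.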