2022
DOI: 10.1109/access.2022.3140901
A Two-Stage Deep Neuroevolutionary Technique for Self-Adaptive Speech Enhancement

Abstract: This paper presents a novel self-adaptive approach for speech enhancement in the context of highly nonstationary noise. A two-stage deep neuroevolutionary technique for speech enhancement is proposed. The first stage is composed of a deep neural network (DNN) method for speech enhancement. Two DNN methods were tested at this stage, namely, a deep complex convolution recurrent network (DCCRN) and a residual long short-term memory neural network (ResLSTM). The ResLSTM method was combined with a minimum mean…

Cited by 4 publications (1 citation statement)
References 43 publications
“…A gradient clipping approach can be used to solve the problem. Long-Short-Term Memory (LSTM) [25], [26], [27] enhances gradient vanishing by providing a memory cell framework that facilitates information flow across network layers. In the recent past, LSTM-based SE has gained much attention [25], [28], [29], [30], [31], [32], [33], [34].…”
Section: Introduction
Mentioning confidence: 99%
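The quoted statement attributes LSTM's resistance to vanishing gradients to its memory cell, whose state is updated additively through gates rather than repeatedly squashed. As a minimal illustration (this is a generic sketch, not code from the cited paper; all names and shapes are assumptions), a single LSTM step can be written in plain NumPy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.

    x: input (D,), h_prev/c_prev: previous hidden and cell states (H,),
    W: stacked gate weights (4H, D+H), b: stacked gate biases (4H,).
    The cell state c is updated additively (f * c_prev + i * g), so
    gradients can flow across many steps without vanishing.
    """
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:4 * H])  # candidate cell update
    c = f * c_prev + i * g       # additive memory-cell update
    h = o * np.tanh(c)           # hidden state, bounded in (-1, 1)
    return h, c
```

Unrolling `lstm_step` over the frames of a noisy spectrogram is the basic pattern behind the LSTM-based speech-enhancement models the quote refers to.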