ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp49357.2023.10096411
A Progressive Neural Network for Acoustic Echo Cancellation

Cited by 6 publications (3 citation statements)
References 4 publications
“…• There is a PCC=0.49 between using additional datasets and the overall scale. Only one team added additional data (LibriSpeech [50]), though they were the first-place team [51]. • The first-place entry showed that personalized AEC did increase performance, but only by a small amount (improving the final score by 0.002).…”
Section: Results and Analysis
confidence: 99%
“…The first approach will be a basic normalized least mean squares (NLMS) filter after [3], as it has been widely used in AEC. Variations of this method are still frequently deployed in modern systems and remain a subject of research [7], [8]. A further description of the specific model used here is given in Appendix A.…”
Section: B. Echo Control (EC) Methods Under Test
confidence: 99%
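The NLMS filter referenced in the excerpt above is the classical adaptive baseline for echo cancellation: it estimates the echo path from the far-end (loudspeaker) signal and subtracts the estimated echo from the microphone signal. A minimal sketch of a generic time-domain NLMS echo canceller follows; the function name, tap count, and step size are illustrative assumptions, not the implementation used in the cited paper or toolbox.

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, filter_len=128, mu=0.5, eps=1e-8):
    """Generic NLMS sketch: adaptively subtract the echo of far_end from mic.

    far_end    : loudspeaker (reference) signal
    mic        : microphone signal containing the echo
    filter_len : number of adaptive filter taps (assumed value)
    mu         : NLMS step size, 0 < mu < 2 (assumed value)
    eps        : regularizer to avoid division by zero
    """
    w = np.zeros(filter_len)        # adaptive echo-path estimate
    out = np.zeros(len(mic))        # echo-cancelled output (residual)
    for n in range(filter_len - 1, len(mic)):
        # Most recent far-end samples, newest first
        x = far_end[n - filter_len + 1:n + 1][::-1]
        y = w @ x                   # estimated echo at time n
        e = mic[n] - y              # residual = near-end estimate
        # Normalized LMS update: step scaled by input energy
        w += mu * e * x / (x @ x + eps)
        out[n] = e
    return out
```

With a stationary echo path and no near-end speech, the residual energy decays as the filter converges, which is why NLMS variants remain a common front end even in the hybrid DNN systems discussed below.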
“…Recently, there has also been a great number of approaches employing deep neural networks (DNNs), either as a fully learned EC [10]-[14], as a residual (deep) echo suppression postfilter after a linear AEC [8], [15], [16], or as a hybrid approach combining parts of classical approaches with deep learning components [9], [17]. The most prominent recently published work is likely Microsoft's DeepVQE model [14], performing AES, noise suppression, and speech dereverberation in a single network, thereby presenting a solution to challenging tasks that previously required a multi-stage network for good near-end speech preservation [11], [12].…”
(Footnote 1: https://github.com/ifnspaml/EC-Evaluation-Toolbox)
Section: Introduction
confidence: 99%