ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9052938

Tensor-To-Vector Regression for Multi-Channel Speech Enhancement Based on Tensor-Train Network

Abstract: We propose a tensor-to-vector regression approach to multi-channel speech enhancement in order to address the issues of input size explosion and hidden-layer size expansion. The key idea is to cast the conventional deep neural network (DNN) based vector-to-vector regression formulation under a tensor-train network (TTN) framework. TTN is a recently proposed solution for the compact representation of deep models with fully connected hidden layers. Thus, TTN maintains the DNN's expressive power yet involves a much smaller …
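
To make the TTN idea concrete, here is a minimal sketch of how a fully connected layer can be recast in tensor-train (TT) form: the input vector is reshaped into a higher-order tensor and contracted against a chain of small TT-cores instead of one large weight matrix. This is our own NumPy illustration; the function name, core shapes, and toy sizes are assumptions for exposition, not taken from the paper.

```python
import numpy as np

def tt_layer_forward(x, cores):
    """Forward pass of a fully connected layer in tensor-train (TT) form.

    x     : input vector of length n_1 * ... * n_d
    cores : list of d TT-cores; core k has shape (r_{k-1}, n_k, m_k, r_k),
            with boundary ranks r_0 = r_d = 1
    Returns an output vector of length m_1 * ... * m_d.
    """
    in_modes = [c.shape[1] for c in cores]
    z = x.reshape(1, *in_modes)            # prepend the rank-1 bond r_0
    for core in cores:
        # contract the current bond and input mode with the core:
        # (r_{k-1}, n_k, ...) x (r_{k-1}, n_k, m_k, r_k) -> (..., m_k, r_k)
        z = np.tensordot(z, core, axes=([0, 1], [0, 1]))
        z = np.moveaxis(z, -1, 0)          # bring the new bond r_k to the front
    return z.reshape(-1)

# Toy example: a 64 -> 16 layer factored as (8, 8) -> (4, 4) with TT-rank 3;
# the two cores hold 1*8*4*3 + 3*8*4*1 = 192 numbers instead of 64*16 = 1024.
rng = np.random.default_rng(0)
cores = [rng.standard_normal((1, 8, 4, 3)),
         rng.standard_normal((3, 8, 4, 1))]
y = tt_layer_forward(rng.standard_normal(64), cores)
print(y.shape)  # (16,)
```

With input modes n_1, …, n_d, output modes m_1, …, m_d, and TT-ranks r_k, such a layer stores only the sum of r_{k-1}·n_k·m_k·r_k entries rather than (∏ n_k)(∏ m_k), which is the compression the abstract alludes to.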

Cited by 19 publications (8 citation statements) | References 25 publications

“…The matrix associated with a DNN hidden layer corresponds to two matrices given the ranks, and the DNN input vector is reshaped into a higher-order input tensor. We have shown that the TT decomposition can preserve the representation power of the DNN [17]. In [17], we have also demonstrated that for a tensor-to-vector function…”
Section: DNN-TT Based Tensor-to-Vector Regression (mentioning)
confidence: 73%
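
The statement's first sentence, that a hidden-layer weight matrix "corresponds to two matrices given the ranks", can be illustrated with an ordinary rank-truncated factorization. The snippet below is our own simplification of the full TT-matrix format, with illustrative sizes and rank:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 128))  # a hidden-layer weight matrix

r = 16                               # the chosen rank
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]                 # 256 x r
B = Vt[:r, :]                        # r x 128
# W is replaced by the two matrices A and B:
# r * (256 + 128) = 6144 parameters instead of 256 * 128 = 32768
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```

In the TT setting this splitting is applied recursively across the reshaped input and output modes, which is what lets the decomposition retain the DNN's representation power at a fraction of the parameter count [17].
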
“…Since it has been shown that RNNs overfit very quickly [11], various regularization methods, such as early stopping or small, under-specified models [12], have to be used during the RNN training stage. Although dropout is commonly used as a simple and effective regularizer against overfitting in deep neural networks [13,14], it has been concluded that naively applying dropout to the recurrent weights of an RNN cannot reliably solve the overfitting problem, because the noise added to the recurrent connections leads to model instabilities [15].…”
Section: Introduction (mentioning)
confidence: 99%
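
To illustrate the distinction drawn in [15], the sketch below applies dropout only to the non-recurrent (input) connection of a vanilla RNN cell and leaves the recurrent path undropped, which is the commonly used workaround. The function name and shapes are ours, for illustration only:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b, p_drop, rng, train=True):
    """One vanilla-RNN step with dropout on the input connection only.

    The recurrent term h_prev @ W_hh is deliberately left undropped:
    injecting noise there at every time step is what destabilizes
    training, per the observation cited as [15].
    """
    if train and p_drop > 0.0:
        # inverted dropout: scale at train time so inference is unchanged
        keep = (rng.random(x_t.shape) >= p_drop) / (1.0 - p_drop)
        x_t = x_t * keep
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b)
```
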
“…Thus, this paper aims at bridging this gap. In particular, we investigate MAE and MSE in terms of performance error bounds and robustness against various noises in the context of deep neural network (DNN) based vector-to-vector regression, since DNNs offer better representation power and generalization capability in large-scale regression problems, such as those addressed in [18]–[21].…”
Section: Introduction (mentioning)
confidence: 99%
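
For reference, the two losses compared in this statement are, for a regression function f evaluated on pairs (x_i, y_i), i = 1, …, N, as follows (standard definitions, not the quoted paper's notation):

```latex
\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \bigl\| f(\mathbf{x}_i) - \mathbf{y}_i \bigr\|_2^2,
\qquad
\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \bigl\| f(\mathbf{x}_i) - \mathbf{y}_i \bigr\|_1 .
```

MAE's penalty grows linearly in the residual while MSE's grows quadratically, which is why robustness to outlying noise is the natural axis along which the two losses are compared.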