ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2019.8683611
Latent Representation Learning for Artificial Bandwidth Extension Using a Conditional Variational Auto-encoder

Cited by 9 publications (5 citation statements)
References 19 publications
“…VAE combined with neural layers, and its variants, were explained for use in anomaly detection. They also made the forecasting process more efficient [53–56].…”
Section: V) Variational Auto-encoders (VAE) (mentioning)
confidence: 99%
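
The snippet above names the two VAE uses this paper is cited for: anomaly detection and more efficient forecasting. As a minimal sketch of the first, assuming PyTorch and purely illustrative names (SmallVAE, threshold), a trained VAE can flag anomalies by thresholding per-sample reconstruction error:

```python
# Minimal sketch of VAE-based anomaly detection via reconstruction error.
# All names (SmallVAE, threshold, x) are illustrative, not from the cited works.
import torch
import torch.nn as nn

class SmallVAE(nn.Module):
    def __init__(self, d_in=32, d_lat=4):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_lat)   # outputs [mu, log_var]
        self.dec = nn.Linear(d_lat, d_in)

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparameterization
        return self.dec(z), mu, log_var

vae = SmallVAE()
x = torch.randn(8, 32)                       # a batch of toy sensor readings
x_hat, mu, log_var = vae(x)
recon_err = ((x - x_hat) ** 2).mean(dim=-1)  # per-sample reconstruction error
threshold = 1.5                              # in practice, tuned on normal data
is_anomaly = recon_err > threshold           # flag poorly reconstructed samples
```

Samples the model reconstructs poorly are taken as anomalous; the threshold would normally be calibrated on held-out normal data.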
“…VAEs are a relatively new class of AEs in energy systems which employ the concepts of variational inference and Bayesian optimization to carry out the encoding and decoding operations [126, 127]. VAEs integrated with recurrent neural layers, and their variants, have recently been explored for anomaly detection and to make the forecasting process more efficient [128–131]. To reduce the dimensionality arising from the time lags considered in renewable generation forecasting, VAEs have been implemented in [132] before performing forecasts with a BLSTM.…”
Section: Data Pre-processing Techniques for Energy Forecasting (mentioning)
confidence: 99%
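
The pipeline this snippet describes, compressing lagged inputs with a VAE encoder before forecasting with a bidirectional LSTM as in [132], can be sketched as follows. All dimensions and layer choices here are illustrative assumptions, not the cited paper's configuration:

```python
# Hedged sketch: encode lag windows with a VAE encoder, then forecast from the
# latent sequence with a bidirectional LSTM. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

d_lag, d_lat, horizon = 24, 4, 1
encoder = nn.Linear(d_lag, 2 * d_lat)      # VAE encoder head: [mu, log_var]
blstm = nn.LSTM(d_lat, 16, bidirectional=True, batch_first=True)
head = nn.Linear(2 * 16, horizon)          # 2*16: both LSTM directions

windows = torch.randn(8, 10, d_lag)        # batch of 10-step lag windows
mu, _ = encoder(windows).chunk(2, dim=-1)  # use latent means as compact features
out, _ = blstm(mu)                         # (8, 10, 32)
forecast = head(out[:, -1])                # predict from the last time step
```

The design point is that the BLSTM sees a 4-dimensional latent sequence instead of the raw 24 lags, which is the dimensionality reduction the snippet attributes to [132].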
“…application of anomaly detection and to make the forecasting process more efficient [128–131]. To reduce the dimensionality arising from the time lags considered in renewable generation forecasting, VAEs have been implemented in [132] before performing forecasts with a BLSTM.…”
Section: Variational Auto-encoders (VAE) (mentioning)
confidence: 99%
“…In this context, variational autoencoders (VAEs), widely used in the existing deep learning literature for dimensionality reduction [28], can be implemented to improve the computational efficiency of Bayesian deep learning techniques. A VAE represents the encoder with probability distributions that transform the input data into a lower-dimensional structure [29].…”
Section: Introduction (mentioning)
confidence: 99%
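
A minimal sketch of the probabilistic encoder this snippet describes, assuming PyTorch and illustrative layer sizes: the encoder outputs the mean and log-variance of a low-dimensional Gaussian posterior, and a KL term keeps that posterior close to a unit prior:

```python
# Sketch of a VAE encoder as a probability distribution over a low-dimensional
# latent space. Layer sizes (64 -> 32 -> 8 latent dims) are assumptions.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2 * 8))

x = torch.randn(16, 64)
mu, log_var = enc(x).chunk(2, dim=-1)   # q(z|x) = N(mu, diag(exp(log_var)))
z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # 8-dim latent code
# KL(q(z|x) || N(0, I)), the regularizer in the VAE objective
kl = 0.5 * (mu**2 + log_var.exp() - 1 - log_var).sum(dim=-1).mean()
```

Each 64-dimensional input is thus mapped not to a point but to a distribution over an 8-dimensional latent space, which is the probabilistic dimensionality reduction the snippet refers to.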