Process monitoring using variational autoencoder for high-dimensional nonlinear processes (2019)
DOI: 10.1016/j.engappai.2019.04.013

Cited by 132 publications (45 citation statements)
References 35 publications
“…Usually the negative expected log-likelihood (e.g., the cross-entropy function) is used ([30], [31], [33]-[35]), but the mean squared error [32] can also be used. The second term L_KL (equ.…”
Section: Variational Autoencoders (mentioning)
confidence: 99%
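For context, a common form of the VAE training objective to which these two terms refer, written as a sketch following the standard Kingma-Welling formulation (the exact equation number in the citing paper is not reproduced here):

\mathcal{L}(\theta, \phi; x) =
    \underbrace{-\,\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]}_{\mathcal{L}_{\mathrm{rec}}}
  \;+\; \underbrace{D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big)}_{\mathcal{L}_{\mathrm{KL}}}

Here \mathcal{L}_{\mathrm{rec}} is the reconstruction term (a negative expected log-likelihood such as cross-entropy, or a mean-squared-error surrogate), and \mathcal{L}_{\mathrm{KL}} regularizes the approximate posterior q_\phi(z \mid x) toward the prior p(z), typically \mathcal{N}(0, I).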
“…In nonlinear process monitoring, VAEs have recently been used for high-dimensional process fault diagnosis. The most relevant characteristics of the process are captured in the latent variable space by projecting the high-dimensional process data into a lower-dimensional space [8], [29], [31], [32], [34], [39]-[42].…”
Section: Variational Autoencoders (mentioning)
confidence: 99%
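As an illustration of the projection described in this excerpt, below is a minimal sketch of a VAE encoder mapping high-dimensional process data to a low-dimensional latent space (PyTorch assumed; the VAEEncoder class, layer sizes, and dimensions are illustrative placeholders, not taken from the cited papers):

import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    """Maps high-dimensional process measurements to a low-dimensional latent space."""
    def __init__(self, n_inputs=50, n_latent=3):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(n_inputs, 32), nn.ReLU())
        self.mu = nn.Linear(32, n_latent)        # mean of q(z|x)
        self.logvar = nn.Linear(32, n_latent)    # log-variance of q(z|x)

    def forward(self, x):
        h = self.hidden(x)
        return self.mu(h), self.logvar(h)

encoder = VAEEncoder()
x = torch.randn(8, 50)                           # placeholder batch of 50-dimensional process data
mu, logvar = encoder(x)
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
print(z.shape)                                   # torch.Size([8, 3]): the lower-dimensional projection

In a monitoring setting, statistics computed on z (or on the reconstruction error of the paired decoder) would then be tracked against control limits; that part is omitted here.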
“…In 2019, variational autoencoders have been widely used to analyze different kinds of signals and to monitor them [22,23]. In addition, in Zemouri et al. [24], variational autoencoders have been used to train a model as a 2D visualization tool for partial discharge source classification.…”
Section: Deep Learning Techniques (mentioning)
confidence: 99%
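A minimal sketch of the kind of 2D latent-space visualization mentioned here, using synthetic stand-ins for the latent codes and the partial-discharge source classes (matplotlib assumed; nothing below reproduces the cited model or data):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
z = rng.normal(size=(300, 2))              # stand-in for 2D latent codes from a trained VAE encoder
labels = rng.integers(0, 3, size=300)      # stand-in for partial-discharge source classes

for c in range(3):
    pts = z[labels == c]
    plt.scatter(pts[:, 0], pts[:, 1], s=10, label=f"source {c}")
plt.xlabel("latent dimension 1")
plt.ylabel("latent dimension 2")
plt.legend()
plt.title("2D VAE latent space (illustrative)")
plt.show()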
“…Deep generative models (DGMs) are part of the deep models family and are a powerful way to learn any distribution of observed data through unsupervised learning. The DGMs are composed mainly of variational autoencoders (VAEs) [1]-[4] and generative adversarial networks (GANs) [5]. The VAEs are mainly used to extract features from the input vector in an unsupervised way, while the GANs are used to generate synthetic samples through adversarial learning by reaching an equilibrium between a Generator and a Discriminator.…”
Section: Introduction (mentioning)
confidence: 99%
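To make the adversarial setup concrete, here is a minimal Generator/Discriminator training loop (PyTorch assumed; the toy data, network sizes, and number of steps are illustrative only, not the cited works' models):

import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))       # noise -> synthetic sample
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real = torch.randn(64, 2) + 3.0            # toy stand-in for the real data distribution

for step in range(200):
    # Discriminator step: learn to separate real samples from generated ones.
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + bce(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into labelling fakes as real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The equilibrium mentioned in the excerpt corresponds to the point where the discriminator can no longer distinguish generated samples from real ones.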
“…The VAEs have met with great success in recent years in several application areas, including anomaly detection [6]-[9], text classification [10], sentence generation [11], speech synthesis and recognition [12]-[14], spatio-temporal solar irradiance forecasting [15], and in geoscience for data assimilation [2]. In other respects, the two major application areas of the VAEs are biomedical and healthcare recommendation [16]-[19], and industrial applications for nonlinear process monitoring [1,3,4,20-25].…”
Section: Introduction (mentioning)
confidence: 99%