2018
DOI: 10.1252/jcej.17we064

Fault Detection Based on Probabilistic Kernel Partial Least Square Regression for Industrial Processes

Citing publications: 2019–2024

Cited by 5 publications (10 citation statements)
References 33 publications
“…In PKPLS, the input sample matrix $\Phi(X) \in \mathbb{R}^{D}$ and the output sample matrix $Y \in \mathbb{R}^{s}$ are assumed to be generated from the latent variable $t \in \mathbb{R}^{k}$ as follows:

$$\Phi(X) = A t + \mu_{\Phi(X)} + e_{\Phi(X)}, \qquad Y = B t + \mu_{Y} + e_{Y},$$

where $A \in \mathbb{R}^{D \times k}$ and $B \in \mathbb{R}^{s \times k}$ denote the loading matrices for the input and output, respectively; $\mu_{\Phi(X)}$ and $\mu_{Y}$ denote the mean vectors of the input and output, respectively; $t$ denotes the latent variable vector, Gaussian distributed as $t \sim N(0, I)$, where $I$ is an identity matrix; and $e_{\Phi(X)}$ and $e_{Y}$ are the noises contained in the input and output, respectively, Gaussian distributed as $e_{\Phi(X)} \sim N(0, \Omega_{\Phi(X)})$ and $e_{Y} \sim N(0, \Omega_{Y})$, with $\Omega_{\Phi(X)} = \sigma_{\Phi(X)}^{2} I$ and $\Omega_{Y} = \sigma_{Y}^{2} I$ as the covariance matrices of the input noise and output noise, respectively.…”
Section: A Review of the PKPLS Model (mentioning)
confidence: 99%
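The generative model quoted above can be sketched numerically. The following is a minimal illustration, not code from the paper; all dimensions, parameter values, and noise levels are assumptions chosen for the example. It samples $\Phi(X)$ and $Y$ from the latent variable $t$ and checks that the sample covariance of $\Phi(X)$ approaches the model-implied covariance $A A^{\top} + \sigma_{\Phi(X)}^{2} I$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper):
# feature dim D, output dim s, latent dim k, sample count n
D, s, k, n = 5, 2, 3, 1000

# Model parameters: loading matrices A, B, mean vectors, isotropic noise
A = rng.standard_normal((D, k))      # input loading matrix
B = rng.standard_normal((s, k))      # output loading matrix
mu_phi = rng.standard_normal(D)      # mean vector of Phi(X)
mu_y = rng.standard_normal(s)        # mean vector of Y
sigma_phi, sigma_y = 0.1, 0.1        # noise standard deviations

# Latent variables t ~ N(0, I), then observations per the model equations
T = rng.standard_normal((n, k))
Phi_X = T @ A.T + mu_phi + sigma_phi * rng.standard_normal((n, D))
Y = T @ B.T + mu_y + sigma_y * rng.standard_normal((n, s))

# Sample covariance of Phi(X) should approach A A^T + sigma_phi^2 I
cov_emp = np.cov(Phi_X, rowvar=False)
cov_model = A @ A.T + sigma_phi**2 * np.eye(D)
print(np.abs(cov_emp - cov_model).max())  # small for large n
```

The check at the end uses the fact that, for this linear-Gaussian model, the covariance of $\Phi(X)$ is $A A^{\top} + \Omega_{\Phi(X)}$.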
“…From the model equations above, the joint distribution of $\Phi(X)$ and $Y$ given $t$ is as follows:

$$\begin{pmatrix} \Phi(X) \\ Y \end{pmatrix} \Big|\, t, A, B, \Omega_{\Phi(X)}, \Omega_{Y} \;\sim\; N\!\left( \mu_{\Phi(X), Y \mid t},\; \Sigma_{\Phi(X), Y \mid t} \right),$$

where

$$\mu_{\Phi(X), Y \mid t} = \begin{pmatrix} A t + \mu_{\Phi(X)} \\ B t + \mu_{Y} \end{pmatrix}, \qquad \Sigma_{\Phi(X), Y \mid t} = \begin{pmatrix} \Omega_{\Phi(X)} & 0 \\ 0 & \Omega_{Y} \end{pmatrix}.\text{…”}$$
Section: A Review of the PKPLS Model (mentioning)
confidence: 99%
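Integrating out the latent variable $t$ gives the marginal joint distribution of the observations, a standard property of linear-Gaussian models that is consistent with, though not quoted in, the excerpt above:

```latex
\begin{pmatrix} \Phi(X) \\ Y \end{pmatrix}
\sim N\!\left(
\begin{pmatrix} \mu_{\Phi(X)} \\ \mu_{Y} \end{pmatrix},\;
\begin{pmatrix}
A A^{\top} + \Omega_{\Phi(X)} & A B^{\top} \\
B A^{\top} & B B^{\top} + \Omega_{Y}
\end{pmatrix}
\right).
```

The off-diagonal blocks $A B^{\top}$ and $B A^{\top}$ capture the input–output correlation induced by the shared latent variable.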
“…The last decade has witnessed the development and use of nonlinear variants of projection-based MSPM strategies, primarily to address fault detection and diagnosis in complex process industries whose process and feedstock characteristics exhibit nonlinear profiles. The nonlinear counterparts of the linear MSPM techniques can be broadly classified into those based on linear approximation [5,6], those based on kernel functions [7–10], and those based on artificial intelligence (AI) [11]. In linear-approximation-based techniques, the entire data space is approximated by several linear subspaces, a separate monitoring strategy is developed for each subspace, and the subspace-level results are combined using Bayesian inference. In kernel-function-based techniques, the nonlinear data matrix (the measurement space) is mapped into a higher-dimensional feature space by a kernel function, such as the Gaussian kernel, where the data behave linearly, and PCA is applied in the feature space to extract components.…”
Section: Introduction (mentioning)
confidence: 99%
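The kernel-function route described above can be sketched with a minimal Gaussian-kernel PCA in NumPy. This is an illustrative implementation under stated assumptions, not the code of any cited paper; the toy data, the kernel width `gamma`, and the component count are all assumptions:

```python
import numpy as np

def gaussian_kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with a Gaussian (RBF) kernel: map the data implicitly to a
    high-dimensional feature space and extract principal components there."""
    n = X.shape[0]
    # Gaussian kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * sq_dists)
    # Center the kernel matrix (centering in feature space)
    one_n = np.ones((n, n)) / n
    K_c = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition of the centered kernel matrix; top eigenvectors
    # define the kernel principal components
    eigvals, eigvecs = np.linalg.eigh(K_c)
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Scores of the training samples on the leading components
    return K_c @ eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))

# Toy nonlinear data: points on a noisy circle (an assumption for illustration)
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))
scores = gaussian_kernel_pca(X, n_components=2, gamma=2.0)
print(scores.shape)  # (200, 2)
```

In a monitoring context, statistics such as T² and SPE would then be computed on these feature-space scores rather than on the raw measurements.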