2023
DOI: 10.3390/pr11071935
A Novel Dynamic Process Monitoring Algorithm: Dynamic Orthonormal Subspace Analysis

Abstract: Orthonormal subspace analysis (OSA) is proposed for handling the subspace decomposition issue and the principal component selection issue in traditional key performance indicator (KPI)-related process monitoring methods such as partial least squares (PLS) and canonical correlation analysis (CCA). However, it is not appropriate to apply the static OSA algorithm to a dynamic process since OSA pays no attention to the auto-correlation relationships in variables. Therefore, a novel dynamic OSA (DOSA) algorithm is …

Cited by 3 publications (1 citation statement)
References 30 publications
“…For instance, the PCA-based control charts and their modifications are all committed to transforming the raw data into a reduced set of independent features and then employing Hotelling's T² statistic under the presumed Gaussianity. Later, other machine learning methods, such as CCA and PLS [14][15][16], as well as some deep unsupervised architectures [17][18][19][20][21], adopted this idea and construct the same statistic from the reduced space of transformed features. It was not until the inception of the variational auto-encoder (VAE) that deep learners switched focus to using strong transformations to secure the distribution premise before using it for threshold determination [22][23][24][25][26][27].…”
Section: Introduction
confidence: 99%
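The citing passage describes the classic monitoring recipe: project the data onto a reduced set of principal components, then compute Hotelling's T² statistic in that reduced space. A minimal NumPy sketch of that idea follows; the function name, the number of retained components, and the synthetic data are illustrative, not from the paper.

```python
import numpy as np

def hotelling_t2(X, n_components=2):
    """PCA-based Hotelling's T^2 for each sample in X.

    X: (n_samples, n_vars) data matrix, assumed roughly Gaussian.
    Returns one T^2 value per sample, computed in the reduced
    space spanned by the top n_components principal directions.
    """
    Xc = X - X.mean(axis=0)                 # center the variables
    cov = np.cov(Xc, rowvar=False)          # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    P = eigvecs[:, order]                   # loading matrix (top components)
    lam = eigvals[order]                    # variances of retained scores
    T = Xc @ P                              # scores in the reduced space
    return np.sum(T**2 / lam, axis=1)       # T^2 = sum_i t_i^2 / lambda_i

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))               # hypothetical process data
t2 = hotelling_t2(X, n_components=2)
print(t2.shape)                              # (200,)
```

In practice a control limit (e.g. from an F- or chi-square distribution under the Gaussianity assumption the passage mentions) is placed on T², and samples exceeding it are flagged as faults; a dynamic variant such as DOSA additionally augments X with time-lagged copies of the variables to capture auto-correlation.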