2021 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata52589.2021.9671991
Soft-Sensing ConFormer: A Curriculum Learning-based Convolutional Transformer

Cited by 9 publications (3 citation statements)
References 20 publications
“…Soft Sensing Transformer (SST) [20] demonstrates the similarities between sensor readings and text data, and applies the Transformer encoder [3] to this task. ConFormer [21] integrates the structures of both CNN and Transformer. Soft Sensing Model Visualization [22] puts more focus on model interpretation.…”
Section: Related Work
confidence: 99%
“…As a result, features are chosen [16] and methods like PCA (principal component analysis) [17] or LDA (linear discriminant analysis) [18] are typically employed to reduce the dimensionality of these features. In addition, [19], for example, used data entropy to preprocess raw time series data. RPCA (recursive PCA) [20], DPCA (dynamic PCA) [21], and KPCA (kernel PCA) [22] are used to monitor a variety of industrial processes, including adaptive, dynamic, and nonlinear processes [24]. Two RPCA algorithms were published in [25] to adapt to regular process changes in semiconductor production operations.…”
Section: Related Work
confidence: 99%
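The excerpt above describes PCA-based dimensionality reduction of high-dimensional sensor features. A minimal sketch of that step in Python, using scikit-learn; the data here is simulated and the array names are illustrative, not taken from the cited works:

```python
import numpy as np
from sklearn.decomposition import PCA

# Simulated sensor readings: 100 samples x 50 sensor channels
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))

# Project onto the top 5 principal components, standing in for the
# PCA-style dimensionality reduction described in the citation above
pca = PCA(n_components=5)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)  # (100, 5)
```

Variants such as RPCA, DPCA, and KPCA mentioned in the excerpt extend this basic projection to adaptive, dynamic, and nonlinear settings, respectively.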
“…One such model, Soft-sensing Transformer (SST) [21], utilizes a Transformer encoder [3] to demonstrate the similarities between sensor readings and text data. Another model, ConFormer [22], leverages multi-head convolution modules to achieve fast and lightweight operations while still being able to learn robust representations through multi-head design, similar to transformers. Soft-sensing Model Visualization [23] fine-tunes the model by adjusting the weights of input features based on misclassified examples.…”
Section: Related Work
confidence: 99%