2021 IEEE Seventh International Conference on Big Data Computing Service and Applications (BigDataService)
DOI: 10.1109/bigdataservice52369.2021.00007

A Deep Learning Approach for Short Term Prediction of Industrial Plant Working Status

Cited by 3 publications (3 citation statements)
References 21 publications
“…Articles
Shapley Additive Explanations (V-A1): [72], [96], [99], [117], [118], [130], [132], [36]-[38], [42], [66], [75]-[77], [120], [131], [134], [135]
Local Interpretable Model-agnostic Explanations (V-A2): [35]-[38], [44], [50], [51], [54], [61], [66], [76], [84]
Feature Importance (V-A3): [34], [54], [67], [85], [86], [93], [101], [115], [137], [139]
Layer-wise Relevance Propagation (V-A4): [37], [44], [68], [87], [109], [116]
Rule-based (V-A5): [65], [70], [71], [73]
Class Activation Mapping (CAM) and Gradient-weighted CAM (V-B1): [37], [44],…”
Section: Methods (mentioning)
confidence: 99%
“…[56]-[68] CMAPSS [69] Shapley Additive Explanations and Rule Based [35]-[37], [70]-[80] General Machine Faults and Failures [81] Feature Importance, Shapley Additive Explanation, Class Activation Mapping and Local Interpretable Model-agnostic Explanation [42], [48], [82]-[87] Trains Feature Importance [34], [88]-[93] Gearboxes [94] Shapley Additive Explanations and Interpretable Filters [42], [45], [48], [95] Artificial Dataset Local Interpretable Model-agnostic Explanation, Shapley Additive Explanation, Counterfactual and Surrogate [44], [96], [97] Hot [38], [70], [106] Lithium-ion Batteries [107] Layer-wise Relevance Propagation [37], [108], [109] Wind Turbines [110] Autoencoder-based Anomaly Root Cause Analysis and Sparse Networks [111], [112] Amusement Park Rides Depth-based Isolation Forest Feature Importance and Accelerated Model-agnostic Explanations [113], [114] Particle Accelerators Layer-wise Relevance Propagation and Feature Importance [115], [116] Chemical Plant Shapley Additive Explanations [117], [118] Semi-conductors [119] Shapley Additive Explanations and Knowledge-based [120], [121] Aircraft Fuzzy…”
Section: Datasets (mentioning)
confidence: 99%
“…Following the completion of the training phase, the model is put to use for online fault detection. In conclusion, the proposed model is applied to the well-known Tennessee Eastman process, and the results of applying it are reported [33]. This holds even when the sample is highly imbalanced: the method can still attain this degree of accuracy, albeit with some limitations.…”
Section: Related Work (mentioning)
confidence: 99%
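The workflow outlined in that citation statement (train a model offline on labelled historical data, then deploy it for online fault detection, with some accommodation for a strongly imbalanced sample) can be illustrated with a minimal sketch. Everything below is assumed for illustration only: the data is synthetic stand-in data rather than the Tennessee Eastman benchmark, the small scikit-learn MLP stands in for the deep model, and naive random oversampling stands in for whatever imbalance handling the cited work actually uses.

```python
# Minimal sketch of an offline-train / online-detect fault-detection loop.
# Synthetic data only; NOT the Tennessee Eastman benchmark or the cited model.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# --- Offline phase: train on a labelled, highly imbalanced history ----------
n_normal, n_fault, n_sensors = 5000, 150, 52      # 52 sensor channels, TE-style
X_normal = rng.normal(0.0, 1.0, size=(n_normal, n_sensors))
X_fault = rng.normal(0.8, 1.2, size=(n_fault, n_sensors))   # shifted "faulty" regime
X = np.vstack([X_normal, X_fault])
y = np.concatenate([np.zeros(n_normal, dtype=int), np.ones(n_fault, dtype=int)])

# Naive random oversampling of the rare fault class (illustrative assumption).
fault_idx = np.flatnonzero(y == 1)
extra = rng.choice(fault_idx, size=n_normal - n_fault, replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])

scaler = StandardScaler().fit(X_bal)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(scaler.transform(X_bal), y_bal)

# --- Online phase: score incoming sensor snapshots one at a time ------------
def detect(sample: np.ndarray) -> bool:
    """Return True if the trained model flags this sensor snapshot as a fault."""
    z = scaler.transform(sample.reshape(1, -1))
    return bool(model.predict(z)[0] == 1)

stream = rng.normal(0.8, 1.2, size=(5, n_sensors))   # pretend these arrive online
print([detect(s) for s in stream])
```

The point of the sketch is the split itself: all fitting (scaler and classifier) happens once, offline, and the online path is reduced to a cheap per-sample transform and predict call, which is what makes this pattern usable for streaming plant data.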