2021
DOI: 10.1002/ppap.202100096

Machine Learning with Explainable Artificial Intelligence Vision for Characterization of Solution Conductivity Using Optical Emission Spectroscopy of Plasma in Aqueous Solution

Abstract: This study presents an explainable artificial intelligence (XAI) vision for optical emission spectroscopy (OES) of plasma in aqueous solution. We aim to characterize the plasma and OES with XAI. Trained with 18000 spectra, a multilayer artificial neural network (ANN) model accurately predicted the solution conductivity. Local interpretable model-agnostic explanations (LIME), an XAI method, interpreted the model through perturbing spectral features and fitting the feature contribution with a linear model. LIME…
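The abstract describes LIME's procedure (perturb spectral features, then fit the feature contributions with a linear model). The sketch below is an assumption-laden illustration in Python, not the authors' implementation: the segmentation of the spectrum, the zero-masking perturbation, the kernel width, and the Ridge surrogate are illustrative choices, and `trained_ann` / `test_spectrum` are hypothetical names.

```python
import numpy as np
from sklearn.linear_model import Ridge


def lime_explain_spectrum(model, spectrum, n_samples=1000, n_segments=50, kernel_width=0.25):
    """LIME-style local explanation for a single OES spectrum.

    Perturbs contiguous spectral segments, queries the black-box model,
    and fits a weighted linear surrogate whose coefficients act as
    per-segment contributions to the predicted conductivity.
    """
    rng = np.random.default_rng(0)
    segments = np.array_split(np.arange(spectrum.shape[0]), n_segments)

    # Binary masks: 1 keeps a segment, 0 zeroes it out.
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    masks[0, :] = 1  # keep the unperturbed spectrum in the sample

    perturbed = np.tile(spectrum.astype(float), (n_samples, 1))
    for j, idx in enumerate(segments):
        perturbed[np.ix_(masks[:, j] == 0, idx)] = 0.0

    preds = model.predict(perturbed)          # black-box predictions (conductivity)

    # Proximity weighting: samples closer to the original spectrum count more.
    distances = 1.0 - masks.mean(axis=1)      # fraction of segments removed
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)

    # Local linear surrogate; its coefficients are the feature contributions.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, preds, sample_weight=weights)
    return surrogate.coef_                    # one contribution per spectral segment


# Illustrative usage with a hypothetical regressor trained on OES spectra:
# contributions = lime_explain_spectrum(trained_ann, test_spectrum)
# top_segments = np.argsort(np.abs(contributions))[::-1][:5]
```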

Cited by 12 publications (8 citation statements) · References 43 publications
“…As an example, local interpretable model-agnostic explanations (LIME) have been suggested for identifying interpretable models that locally approximate the nonlinear behavior of any underlying model. 181 While this is possibly an underexplored aspect in the context of LTP, an example where LIME relates to the data-driven classification and interpretation of OES experiments of plasma in aqueous solution has been proposed. 182 Further works by Lundberg et al. 183 …”
Section: Methods (mentioning)
confidence: 99%
“…181 While this is possibly an underexplored aspect in the context of LTP, an example where LIME relates to the data-driven classification and interpretation of OES experiments of plasma in aqueous solution has been proposed. 182 Further works by Lundberg et al. 183 have included LIME and other methods in an extension, Shapley additive explanations. It is similarly based on local explanation models but defines additive feature attribution methods to unify several of the included interpretable methods.…”
Section: Explainable AI (mentioning)
confidence: 99%
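The statement above summarizes the unifying idea behind Shapley additive explanations: additive attributions over features from a local explanation model. Below is a brief sketch of applying the `shap` package's model-agnostic KernelExplainer to a spectra-to-conductivity regressor; the data, model architecture, sample sizes, and variable names are illustrative assumptions, not taken from the cited works.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

# Illustrative stand-ins for OES spectra (rows) and measured conductivities.
X_train = np.random.rand(200, 512)    # 200 spectra, 512 wavelength bins
y_train = np.random.rand(200)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_train, y_train)

# KernelExplainer is SHAP's model-agnostic estimator of Shapley values;
# a small background sample keeps the computation tractable.
background = shap.sample(X_train, 50)
explainer = shap.KernelExplainer(model.predict, background)

# Additive feature attributions: for each spectrum, the attributions plus the
# expected value sum to the model's prediction for that spectrum.
shap_values = explainer.shap_values(X_train[:1], nsamples=200)
print(shap_values.shape)              # (1, 512): one contribution per wavelength bin
```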
“…Although pollen classification models are yet to be unveiled by xAI, applications built upon principles of chemistry, physics, or spectroscopy can demonstrate the potential benefits of this methodology in a broader context. For example, xAI for optical emission spectroscopy 41 in plasma-based processes unveils why the model made certain predictions, thus allowing characterization of the plasma and the spectra. The study of Gomez-Fernandez et al. 42 examined whether domain-specific characteristics are being identified by deep learning models on gamma spectroscopy tasks.…”
Section: Related Work (mentioning)
confidence: 99%
“…Explainable AI techniques in general have been widely used to explain predictions in financial and chemical time-series data [77,78,79,80], vibration-based Structural Health Monitoring signals [50], hyperspectral imaging [81], and electrocardiogram data [82]. However, to the best of our knowledge, only one recent work has focused on using the model-agnostic method (LIME) to explain the non-linear predictions of spectroscopy data to characterize plasma solution conductivity [29].…”
Section: Related Work (mentioning)
confidence: 99%
“…Most publications in the field either focus solely on obtaining predictions, for example, applying popular ML methods for octane prediction using infrared spectroscopy [25], or on the use of common feature elimination techniques [26] to improve prediction accuracy. The implementation of explainable black-box models is limited to interpreting functional near-infrared spectral data in developmental cognitive neuroscience using simple multivariate analysis [27] and using Local Interpretable Model-Agnostic Explanations (LIME) [28] on optical emission spectroscopy of plasma [29].…”
Section: Introduction (mentioning)
confidence: 99%