2023
DOI: 10.1016/j.artmed.2023.102545
Evaluation of interpretability for deep learning algorithms in EEG emotion recognition: A case study in autism

Cited by 13 publications (2 citation statements)
References 35 publications
“…Explainable AI (XAI) in EEG emotion recognition will be a critical area of future research. Not only will it help researchers validate existing medical knowledge or discover new knowledge, as Mayor Torres et al. (2023) did by using the explainable deep learning algorithm SincNet to identify high-alpha and beta suppression in EEG signals of individuals with autism spectrum disorder, but it will also increase physicians’ confidence in using deep learning for diagnosis (Jafari et al., 2023).…”
Section: Introduction
confidence: 99%
“…Nevertheless, despite the promising results mentioned above, such studies and other related research often lack interpretability and transparency in their findings [35,36]. It can be difficult to explain how modern DL models arrive at their conclusions, which raises concerns about potential biases and the ethical considerations captured by the novel concept of FACTS (Fairness, Accountability, Confidentiality, Transparency, and Safety) [37] in AI.…”
Section: Introduction
confidence: 99%