Proceedings of the 13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics 2022
DOI: 10.1145/3535508.3545547

Self-explaining neural network with concept-based explanations for ICU mortality prediction


Cited by 6 publications (3 citation statements, all mentioning)
References 16 publications
Citing publications by year: 2023 (2), 2024 (2)
“…This problem arises only with machine learning solutions, but in the field of NLP most solutions are based on ML. Unfortunately, current approaches that additionally produce explanations for the actions they take, in NLP and beyond, achieve worse results than those that do not [60]. Nevertheless, this is a very interesting direction of research, both for the development of artificial intelligence itself and for its understanding by humans.…”
Section: Hybrid Solutions (mentioning)
confidence: 99%
“…Real-time feature attribution methods, like those described in [8], [20], [23], [13], [14], [31], [32], [33], require just one iteration to generate an explanation. This is typically accomplished by training a feature selector model.…”
Section: Other Related Work (mentioning)
confidence: 99%
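
The amortized pattern this excerpt describes can be illustrated with a short sketch: a selector network is trained once, jointly with the predictor, and afterwards emits per-feature importance scores in a single forward pass, with no per-sample optimization. This is a minimal sketch in the spirit of learned-selector methods, not the exact architecture of any cited work; the layer sizes and the sparsity weight are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not from the cited papers) of
# amortized feature attribution via a trained feature selector model.
import torch
import torch.nn as nn

class SelectorPredictor(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        # Selector: maps an input to per-feature keep-probabilities in [0, 1].
        self.selector = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features), nn.Sigmoid(),
        )
        # Predictor: consumes the masked (selected) input.
        self.predictor = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        mask = self.selector(x)           # soft per-feature mask
        logit = self.predictor(x * mask)  # predict from selected features
        return logit, mask

model = SelectorPredictor(n_features=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64, 1)).float()

logit, mask = model(x)
# Prediction loss plus a sparsity penalty that encourages few selected features.
loss = nn.functional.binary_cross_entropy_with_logits(logit, y) + 1e-2 * mask.mean()
opt.zero_grad()
loss.backward()
opt.step()

# After training, the mask produced for a new sample *is* its explanation:
# one forward pass, no per-sample optimization loop.
```

The training cost is paid once; at inference the explanation comes "for free" with the prediction, which is what makes this family of methods real-time.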
“…In this paper, we prefer not to explain the decision system as a whole, but to apply an appropriate explainability method to the extracted features and possibly another method to the inference subsystem. Two basic approaches to the explainability of recognition systems can be observed in the literature: the first seeks to preserve explainability by design (self-explaining systems) [12]–[14]. In it, part of the network (e.g., a classifier) is replaced with an explainable component (e.g., a linear model or a logic function).…”
Section: Introduction (mentioning)
confidence: 99%
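
The explainable-by-design idea in this excerpt, which is also the premise of the cited paper's concept-based approach, can be sketched as a network whose final decision layer is a plain linear model over learned concept activations, so each weight reads directly as a concept's contribution. This is an illustrative sketch only, assuming a concept-bottleneck-style layout; the sizes and names are not taken from the cited paper.

```python
# Illustrative sketch (not the cited paper's exact architecture) of
# explainability by design: an opaque encoder produces concept scores,
# and a single linear layer -- the explainable part -- makes the decision.
import torch
import torch.nn as nn

class ConceptLinearNet(nn.Module):
    def __init__(self, n_features: int, n_concepts: int):
        super().__init__()
        # Black-box feature extractor; may be arbitrarily deep.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_concepts), nn.Sigmoid(),  # concept activations in [0, 1]
        )
        # Explainable classifier: one linear layer over concepts.
        self.head = nn.Linear(n_concepts, 1)

    def forward(self, x):
        concepts = self.encoder(x)
        return self.head(concepts), concepts

model = ConceptLinearNet(n_features=20, n_concepts=5)
x = torch.randn(1, 20)
logit, concepts = model(x)

# Per-concept contribution to this prediction: weight * activation.
contrib = model.head.weight.squeeze(0) * concepts.squeeze(0)
for i, c in enumerate(contrib):
    print(f"concept {i}: contribution {c.item():+.3f}")
```

Restricting the decision head to a linear function is exactly the trade the excerpt describes: the encoder stays opaque, but the final inference step is interpretable by construction.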