2021
DOI: 10.1016/j.artint.2021.103471

Using ontologies to enhance human understandability of global post-hoc explanations of black-box models

Cited by 85 publications (38 citation statements)
References 37 publications
“…This hierarchy of concepts could also be used to improve the interpretability and explicability of results. Indeed, understanding how Deep Learning combines information to effectively classify land-use classes remains a challenging task, but recent research using ontologies could be useful to achieve this goal [54]. This idea could also highlight missing exogenous information (elevation, cadastre, etc.)…”
Section: Discussion
confidence: 99%
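The "hierarchy of concepts" idea in this statement can be made concrete with a small sketch: fine-grained attribution scores are summed up an ontology so the explanation can be reported at a more general, human-readable level. The land-use taxonomy, class names, and scores below are hypothetical illustrations, not taken from the cited works.

```python
from collections import defaultdict

# Hypothetical land-use taxonomy: child concept -> parent concept.
ONTOLOGY = {
    "wheat_field": "cropland",
    "corn_field": "cropland",
    "cropland": "agricultural_area",
    "orchard": "agricultural_area",
}

def aggregate_attributions(attributions):
    """Sum fine-grained attribution scores up the concept hierarchy,
    so an explanation can be presented at a coarser, more readable level."""
    totals = defaultdict(float)
    for concept, score in attributions.items():
        totals[concept] += score
        parent = ONTOLOGY.get(concept)
        while parent is not None:  # propagate the score to every ancestor
            totals[parent] += score
            parent = ONTOLOGY.get(parent)
    return dict(totals)

# Example: attributions a model assigned to fine-grained land-use classes.
print(aggregate_attributions({"wheat_field": 0.4, "corn_field": 0.3, "orchard": 0.1}))
```

Here the three fine-grained scores roll up to "cropland" (0.7) and "agricultural_area" (0.8), the kind of higher-level concept an ontology makes available to an explanation.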
“…in response, one is led to the question of how powerful Random Forests can be [7,26]. Gradient Boosted Trees offer a further alternative [1] [2,3,21]; black-box models enter into a symbiotic relationship with decision trees, an exciting research field with potential for explainable and high-performing models. Traditional decision trees are continually being refined and remain a versatile tool for modern AI applications.…”
Section: Advanced Methods
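The "symbiotic relationship" between black-box models and decision trees mentioned here is commonly realized as a global surrogate: an interpretable tree is trained to imitate the black box's predictions. Below is a minimal sketch with scikit-learn, assuming a Random Forest as the black box and synthetic data; none of the dataset choices or hyperparameters come from the cited works.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic tabular data standing in for any classification task.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# 1. Fit the accurate but opaque model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. Fit a shallow, readable tree to the black box's *predictions*
#    (not the original labels), so the tree mimics the model globally.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))
```

Note that the quantity of interest is fidelity, agreement with the black box, rather than accuracy on the true labels: a high-fidelity shallow tree is what makes the surrogate a faithful global explanation.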
“…Questions about model acceptability and validation, as we expose here, are not new to modeling experts, but those experts were trained in a different context and environment compared with the current era of data scientists exposed to AI. Efforts are being made to promote greater transparency in algorithms with low levels of expert intervention [84][85][86][87][88]. Miller [89] calls these efforts "explainable artificial intelligence research" and considers that there will be an increasing need to integrate AI with other fields of knowledge, such as philosophy, cognitive psychology/science, and social psychology.…”
Section: Decision Making
confidence: 99%