2017
DOI: 10.1007/s11257-017-9195-0

A systematic review and taxonomy of explanations in decision support and recommender systems

Abstract: With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today's increasingly sophisticated machine learning algorithms, n…

Cited by 295 publications (218 citation statements)
References 233 publications (182 reference statements)
“…This is in particular the case in scenarios of complex human decision-making, where accountability and explainability are important properties, such as health. Explanations have been explored for several years in the context of recommender systems (Nunes and Jannach 2017; Tintarev and Masthoff 2008), and different types of side information have been used in existing explanation approaches. While existing approaches often use side information to highlight features of the recommended items (Gedikli et al. 2014; Vig et al. 2009), one area of future work could be to use side information to explain the internal mechanisms that guide otherwise black-box-like algorithms.…”
Section: Transparent Recommendations
mentioning
confidence: 99%
“…All of these aims can be interrelated, e.g., an explanation that increases transparency by clarifying how a recommendation was chosen could also increase users' trust in the system. Recent work attempts to classify the types of explanations found in a recommendation context, emphasizing the growing need to endow recommendation systems with the right explanation model [21,22,37].…”
mentioning
confidence: 99%
“…Many existing recommendation systems already generate explanations, and several attempts have been made to classify these explanations [21,22,37]. In item-based explanations, the system relies on the previous recommendation's outcome to justify the current recommendation: "I have recommended X because you previously liked/bought Y."…”
mentioning
confidence: 99%
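The item-based explanation pattern quoted above ("recommended X because you previously liked Y") can be illustrated with a minimal sketch. This is not code from the cited papers; the function name, similarity scores, and item names below are hypothetical, assuming explanations are generated from a simple item-item similarity lookup over the user's liked items.

```python
# Illustrative sketch (not from the cited papers): producing an item-based
# explanation by finding the liked item most similar to the recommendation.

from typing import Dict, List, Tuple

def item_based_explanation(
    recommended_item: str,
    liked_items: List[str],
    similarity: Dict[Tuple[str, str], float],
) -> str:
    """Justify a recommendation by the most similar item the user already liked."""
    best_item, best_score = None, float("-inf")
    for liked in liked_items:
        score = similarity.get((recommended_item, liked), 0.0)
        if score > best_score:
            best_item, best_score = liked, score
    if best_item is None:
        return f"We recommended {recommended_item}."
    return (f"We recommended {recommended_item} because you previously "
            f"liked {best_item}.")

# Example usage with toy data
sims = {("Inception", "Interstellar"): 0.91, ("Inception", "Up"): 0.35}
print(item_based_explanation("Inception", ["Interstellar", "Up"], sims))
# -> "We recommended Inception because you previously liked Interstellar."
```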
“…Facebook has been updating this ads explanation feature; the most recent update in February 2019 claimed to reveal more information about the means by which a brand is distributing the ad (Hutchinson, 2019). Providing explanations to help users gain some understanding about the inner workings of a complex information system is not new. Research on the explanation of recommender systems has attracted a lot of attention in recent years (Nunes & Jannach, 2017). Explanations of a recommender system can be constructed for various purposes: to justify the recommendations (Bobadilla, Ortega, Hernando, & Gutiérrez, 2013), to inspire trust (Pu & Chen, 2007), to provide information for evaluation (Cleger, Fernández-Luna, & Huete, 2014).…”
Section: Micro-targeted Ads and Recommender System Explanations
mentioning
confidence: 99%