Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3290605.3300831

Designing Theory-Driven User-Centric Explainable AI

Cited by 578 publications (483 citation statements)
References: 64 publications
“…In light of this, a recent wave of HCI research has studied what end-users actually desire to understand about ML systems, and how that transparency affects user attitudes and outcomes. Domains of study include recommender systems [4], medicine [12,75], social media [22], creativity [14], and advertisements [23]. While the majority of this work has tended to focus on explaining the reasoning behind specific model decisions, our work instead examines the broader questions that a user may desire to ask of the system as a whole, including components of the ML pipeline that may occur even before a model is built (e.g., data collection, or selection of model design goals).…”
Section: Algorithmic Transparency
confidence: 99%
“…The concept of explaining has long been studied by research disciplines other than information systems or computer science, such as the social sciences [14,32]. Following those lenses, Miller [33] defines an explanation in XAI as an answer given by an explainer to a question posed by an explainee, which can be a why-question such as "Why is that transaction marked as a fraud?".…”
Section: Related Work
confidence: 99%
“…In addition, while it is essential to develop those methods, researchers from the social perspective highlight that XAI research has tended to adopt particular notions of what a good explanation is, without considering, for instance, stakeholders' usability and causability requirements for understanding explanations [14,15,16,17]. Such requirements are essential because they enable an understanding of the quality of explanations in relation to a human agent's properties and their cognitive and intellectual capabilities for working with AI models [18].…”
Section: Introduction
confidence: 99%
“…Semantic understanding covers not only the user's speaking intention but also the acquisition of content, so as to form a complete interaction loop (Dove et al 2019). In addition, in the interaction design, paralinguistic fragments of hesitation and waiting, as well as a humorous sentence corpus, such as "hmm", "uh", and "tell jokes", are added at appropriate moments through the patchwork synthesis system to make the dialogue feel more like a natural conversation with a real person (Wang et al 2019). AI has also been explored extensively in the field of design creativity, in systems such as the Ali Luban system, Looka, and Wix.…”
Section: Related Work
confidence: 99%