Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI 2020)
DOI: 10.1145/3313831.3376624
No Explainability without Accountability

Cited by 81 publications (22 citation statements)
References 25 publications

“…While machine learning measures mainly focus on system performance, human-centered machine learning (HCML) investigates how the users are affected by such systems [22]. Accordingly, research in HCML focuses on users' evaluations of machine learning systems, including aspects such as users' reliance [31] on the system, or trust [40,68] in the system, the perceived explainability [51], interpretability [44], or fairness [20,32] of the system, and the experienced or perceived accuracy of the system [15,19,23,54]. In a recent HCML survey paper, Kaluarachchi et al [22] provide a comprehensive overview of user studies with machine learning systems.…”
Section: Human-Centered Evaluation of Machine Learning Systems
Citation type: mentioning (confidence: 99%)
“…While in-person user studies in labs (like in [66]) are rather rare, crowdsourcing tasks are commonly used to collect feedback from the users. Common tasks include labeling text or image data (e.g., in [31]) or assessing given classifications (e.g., in [27,40,48,51]). Often, the classification accuracy is systematically varied to avoid unpredictable behavior by the system [31,40,66].…”
Section: Human-Centered Evaluation of Machine Learning Systems
Citation type: mentioning (confidence: 99%)
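The systematic variation of classification accuracy described in the statement above is commonly realized with a simulated (Wizard-of-Oz) classifier whose error rate is fixed by the experimenter rather than learned from data. The sketch below shows one way such a controlled-accuracy condition could be generated; the function name simulate_predictions and the Python implementation are illustrative assumptions, not taken from the cited papers.

import random

def simulate_predictions(true_labels, label_set, target_accuracy, seed=0):
    """Return simulated system predictions that are correct for exactly
    round(target_accuracy * n) of the n items, so every participant in a
    condition sees the same overall accuracy level."""
    rng = random.Random(seed)
    n = len(true_labels)
    n_correct = round(target_accuracy * n)
    correct_idx = set(rng.sample(range(n), n_correct))

    predictions = []
    for i, truth in enumerate(true_labels):
        if i in correct_idx:
            predictions.append(truth)  # show the correct label
        else:
            # show a controlled error: any label other than the true one
            wrong = [lab for lab in label_set if lab != truth]
            predictions.append(rng.choice(wrong))
    return predictions

# Example: a 10-item labeling task shown to participants at 70% simulated accuracy
items = ["spam", "ham"] * 5
preds = simulate_predictions(items, {"spam", "ham"}, target_accuracy=0.7)

Fixing the seed per condition keeps the simulated system's behavior identical across participants, which is one way to obtain the predictable, experimenter-controlled accuracy levels the statement refers to.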
“…Xu et al [72] attempted to investigate children's perception of conversational agents available in smart devices. Smith-Renner et al [73] studied how automatically generated explanations of ML models shape users' perceptions of ML models. Völkel et al [74] explored how to mislead chatbots in profiling users.…”
Section: User Studies
Citation type: mentioning (confidence: 99%)
“…Features of AI models addressing the concerns of users to improve the usability and adoptability of AI systems such as explainability, interpretability, privacy, and fairness have been the focus of many HCML related work [6,12,20,26,58,73,76,81,104,178,219]. This is not surprising, given the history of XAI research area dates back to 1980s [220,221].…”
Section: Features of the Models
Citation type: mentioning (confidence: 99%)
“…Finally, producing transcripts alongside translations may be framed as producing an explanation (the transcript) alongside the main output (the translation). Research on explainable machine learning systems (Smith-Renner et al, 2020, and references therein) may shed light on desirable properties of these explanation from a usability point of view, as well as questions related to appropriate user interface design.…”
Section: Consistency vs. Accuracy
Citation type: mentioning (confidence: 99%)