Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (2020)
DOI: 10.1145/3313831.3376615

COGAM: Measuring and Moderating Cognitive Load in Machine Learning Model Explanations

Cited by 63 publications (57 citation statements)
References: 65 publications
“…A key challenge is how to construct and communicate the explanation in a manner that places reasonable cognitive load on the explanation consumers. To this end, techniques for presenting explanations visually, selectively, and progressively [54,62,69], and methods for incorporating the consideration of cognitive load into the explanation generation process [1] should be explored. Moreover, new approaches can be developed to increase people's ability in making full use of the information carried in AI explanations.…”
Section: Conclusion and Discussion
confidence: 99%
“…Researchers have proposed many tasks that AI explanations should assist people in. We reviewed these tasks and used two criteria to narrow down the scope of the tasks from which we extracted the desiderata of AI explanations-first, we focused on those tasks related to the ability of human decision makers in making decisions when they are assisted by an AI model; second, we required the tasks to be easily applicable to any kind of decision making context 1 . Based on tasks that satisfy these criteria, we summarized three desiderata of AI explanations as follows:…”
Section: Literature Review
confidence: 99%
“…Yuan et al [169] trained a deep learning model to predict the scannability of webpage content. Cogam [170] tried to generate explanations for machine learning models by incorporating the desired cognitive load. Lai et al [171] created a technique to annotate visualizations according to text descriptions automatically.…”
Section: Classifying HCML Research
confidence: 99%
“…However, an attempt to build a predictable MRI classification network where a change of network’s parameters results in an expected outcome falls into interpretability research. There have been attempts [157, 158, 170] to develop novel interpretability algorithms using human studies to validate if those algorithms achieved the expected results. Isaac et al [61] studied what matters to the interpretability of an ML system using a human study.…”
Section: Classifying HCML Research
confidence: 99%
“…The need for understanding the inner workings of machine learning methods spurred the research field of eXplainable AI (XAI) to make such models (more) interpretable. However, techniques so far are targeted primarily at experts, and improving their usability for end users is an active area of research [1,2,41]. Regarding the generation of nudges, we formulate the following research questions:…”
Section: Core Tasks
confidence: 99%