Proceedings of the 2017 Conference on Designing Interactive Systems
DOI: 10.1145/3064663.3064703
Designing Contestability

Abstract: We describe the design of an automated assessment and training tool for psychotherapists to illustrate challenges with creating interactive machine learning (ML) systems, particularly in contexts where human life, livelihood, and wellbeing are at stake. We explore how existing theories of interaction design and machine learning apply to the psychotherapy context, and identify “contestability” as a new principle for designing systems that evaluate human behavior. Finally, we offer several strategies for making …

Cited by 78 publications (38 citation statements)
References 33 publications
“…Furthermore, few papers describe the conduct of empirical studies of an end-to-end ML system [78, 140] or assess the quality of ML predictions [53]. One paper specifically discusses design implications for user-centric, deployable ML systems [77].…”
Section: Results
confidence: 99%
“…This is needed to better enable laypeople to calibrate their understanding of a system's capabilities and limitations, to reduce the risk of over-reliance on potentially over-confident predictions [109, 111]. To support scrutiny and encourage more careful interpretation of ML inferences, this suggests the need for (i) stronger efforts to support people's awareness of the probabilistic (rather than deterministic) nature of many ML models, and their likely proneness to errors; (ii) the provision of relevant additional context information and evidence that can help users affirm, or contest, ML outputs [77]; and (iii) the inclusion of opportunities for user input and a strengthening of their role as data controllers [7], through encouragement to ask questions, to inspect any conclusions that seem unreasonable, and to record any disagreements with a system (cf. [77]), or even correct identified errors.…”
Section: Design To Support Appropriate Understanding and Use Of ML-ou…
confidence: 99%
“…[45]. Accordingly, it may be important to adapt future systems to help humans appropriately challenge the evaluations they receive [46]. Ultimately, the improvement of systems such as ClientBot will rely on ongoing “human in the loop” feedback [47], whereby users learn from the system and also provide feedback and insights that serve to make the platform more effective.…”
Section: Discussion
confidence: 99%
“…While 'control' has not been a central focus in the recent revival of ethics concerns that pertain to algorithmic systems, there is a long history of work in HCI that has examined questions of automation and of designing human controls into AI systems [27][28][29]. Such work often tackles this from a cognitive perspective, arguing for opportunities to enhance (rather than replace) human intelligence through productive human-machine partnerships.…”
Section: Background Related Work
confidence: 99%