2020
DOI: 10.1016/j.patter.2020.100049

Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI

Abstract: Artificial intelligence (AI) systems hold great promise as decision-support tools, but we must be able to identify and understand their inevitable mistakes if they are to fulfill this potential. This is particularly true in domains where the decisions are high-stakes, such as law, medicine, and the military. In this Perspective, we describe the particular challenges for AI decision support posed in military coalition operations. These include having to deal with limited, low-quality data, which inevitably comp…

Cited by 97 publications (53 citation statements)
References 64 publications
“…This is relevant to our approach, which is designed to classify past (historical) confessed acts of violence rather than future violence. A recent call, 99 although designed for the prediction of future events, is also relevant for our study. That call emphasizes the importance of considering trust calibration and uncertainty in machine learning.…”
Section: Discussion
confidence: 99%
“…Beyond uninterpretable deep learning results, the machine learning analysis includes trust calibration: it allows one to provide explanations for the main predictors (interpretability) and requires an understanding of the system's capabilities (interpretability) and reliability (uncertainty estimates). 99 Our procedures followed these recommendations at the current dataset level by (1) reducing uninterpretable results (deep learning) with identification of main features (random forest and machine learning PFE); (2) implementing calibration, feature stabilization, and multiple accuracy metrics; (3) avoiding the misidentification of probabilities of individual classification with ontological causality between predictors and outcomes; and (4) connecting the data-driven results with previous theoretical and empirical evidence favoring interpretability and readability. Any interpretation of the present results should consider these analytical and conceptual restrictions.…”
Section: Discussion
confidence: 99%
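The workflow this citing study describes (an interpretable ensemble model, probability calibration, identification of main features, and multiple accuracy metrics) maps onto standard tooling. The sketch below is a minimal, hypothetical illustration in Python with scikit-learn of what such a trust-calibration-aware pipeline could look like; the synthetic data, model choices, and parameter values are assumptions for illustration, not the cited study's actual code.

```python
# Hypothetical sketch of a trust-calibration-aware ML workflow, loosely
# mirroring the steps quoted above: an interpretable ensemble model,
# probability calibration, feature-importance inspection, and multiple
# accuracy metrics. Synthetic data and all parameters are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score, roc_auc_score, brier_score_loss

# Synthetic stand-in for the study's data (assumption).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Interpretable ensemble model: a random forest.
forest = RandomForestClassifier(n_estimators=300, random_state=0)

# Probability calibration, so predicted probabilities can support trust
# calibration rather than being read as raw, possibly overconfident scores.
model = CalibratedClassifierCV(forest, method="isotonic", cv=5)
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
pred = (proba >= 0.5).astype(int)

# Multiple accuracy metrics, including a calibration-sensitive one (Brier).
print("accuracy :", accuracy_score(y_test, pred))
print("ROC AUC  :", roc_auc_score(y_test, proba))
print("Brier    :", brier_score_loss(y_test, proba))

# Permutation importance identifies the main predictors (interpretability).
imp = permutation_importance(model, X_test, y_test, n_repeats=10,
                             random_state=0)
top = imp.importances_mean.argsort()[::-1][:5]
print("top features by permutation importance:", top)
```

A reliability diagram (e.g., via sklearn.calibration.calibration_curve) would be a natural extension of this sketch toward the uncertainty reporting the cited Perspective emphasizes.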
“…AI has the potential to revolutionise the integration of human and machine intelligence and will impact how we learn (Seeber et al., 2020). To address concerns and fears associated with technologies, data and AI, LLL policies need to consider issues of ethics, algorithmic trust, privacy and data security (Tomsett et al., 2020) as well as transparent data governance and standards (Luckin et al., 2016).…”
Section: Existing Visions of Artificial Intelligence for Lifelong Learning
confidence: 99%
“…The approach aims to help users of intelligent systems build an appropriate level of trust by showing them the rationale and reasoning behind an agent's recommendation. Many studies have shown that explanation can indeed improve trust calibration [80]. However, such studies often assumed that users would engage cognitively with explanations and calibrate their trust.…”
Section: Theoretical Background and Related Work
confidence: 99%