Trust and Artificial Intelligence
Preprint (2021)
DOI: 10.6028/nist.ir.8332-draft

Abstract: The artificial intelligence (AI) revolution is upon us, with the promise of advances such as driverless cars, smart buildings, automated health diagnostics and improved security monitoring. In fact, many people already have AI in their lives as "personal" assistants that allow them to search the internet, make phone calls, and create reminder lists through voice commands. Whether consumers know that those systems are AI is unclear. However, reliance on those systems implies that they are deemed t…



Cited by 41 publications (27 citation statements)
References: 20 publications
“…Trustworthy AI is a governance framework designed to mitigate potential adverse impacts on consumers as AI is poised to profoundly and indelibly change our lives. As mentioned in [17], Trustworthy AI is changing the dynamic between user and system into a relationship.…”
Section: Trustworthy AI
confidence: 99%
“…Explainable AI is one of several properties that characterize trust in AI systems [121,127,134]. Other properties include resiliency, reliability, bias, and accountability.…”
Section: Introduction
confidence: 99%
“…Interpretability technology, including explainability, transparency, understandability, legibility, and intelligibility -generally considered as the ability to understand the internal logic, inner workings, and rationale behind predictions -are widely touted as a critical and necessary tool for trust. NIST, a U.S. national lab that aims to influence technology standards, writes that explainability is necessary to determine that an AI system is trustworthy [71]. An editorial in Nature Biomedical Engineering writes "for trust... opening up algorithms to interpretation is a necessary first step" [2].…”
Section: Introduction
confidence: 99%