2022
DOI: 10.1080/10447318.2022.2138826
A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective

Abstract: User trust in Artificial Intelligence (AI)-enabled systems is increasingly recognized as a key element in fostering adoption. It has been suggested that AI-enabled systems must move beyond technology-centric approaches and embrace a more human-centric approach, a core principle of the human-computer interaction (HCI) field. This review provides an overview of user trust definitions, influencing factors, and measurement methods drawn from 23 empirical studies to gather insight for f…

Cited by 52 publications (19 citation statements)
References 78 publications
“…Trust and transparency do not manifest uniformly across all regions and societies, as they vary depending on the regional and societal context (Robinson, 2020; Bach et al., 2022; Wilson, 2022). These values have undergone evolution and display distinct characteristics, ultimately shaping the prevailing stances toward governments and their institutions.…”
Section: Discussion
confidence: 99%
“…However, linking definitions of trust, principles for the development of trustworthy AI systems, and user perspectives on trust is an ongoing area of research. This is because user trust is context-specific and needs to be addressed for the specific domains in which user-AI systems exist (48). Therefore, addressing trust from the perspective of users is open to future research within the IS domain, particularly the clinical laboratory, where significant potential for human-AI teams exists.…”
Section: Discussion
confidence: 99%
“…However, in the context of AI, this trust may become a liability if users become complacent, overlooking the potential for AI responses to degrade as the data landscape shifts. User trust in such AI and the extent to which they perceive it to be a helpful decision-making assistant depends on multiple factors such as socio-ethical considerations, technical and design features, user characteristics, and expertise [9, 10]. When users are well-versed in the mechanics of ChatGPT and the principles guiding its responses, they can navigate its capabilities with discernment, appropriately integrating it into their decision-making processes.…”
Section: Introduction
confidence: 99%