2019
DOI: 10.1177/1071181319631029

Generational differences in trust in digital assistants

Abstract: Human trust in automation has been studied extensively within safety critical domains (military, aviation, process control, etc.) because harmful consequences are associated with the improper calibration of trust in automated systems in these domains (Parasuraman & Riley, 1997). As such, researchers have worked to identify important factors which help humans build trust in such systems (Hoff & Bashir, 2015). With the explosion of AI in consumer technologies, it is becoming equally critical to understan…

Cited by 5 publications (2 citation statements)
References 36 publications
“…Additionally, a study found that only Agreeableness of the Big Five was positively correlated with trust in digital assistants. However, the association remained significant in a regression model only in individuals from Generation Z [40]. Finally, some studies report associations between personality and trust in/agreement with automation.…”
Section: Introduction
Confidence: 85%
“…Aside from personal characteristics, however, different characteristics of the AI product and environment can also influence differences in attitudes towards AI; this has already been proposed in the specific context of trusting AI [62]. Moreover, a study on trust in digital assistants reports positive associations between trust in these assistants and perceived reliability and system performance factors [40]; for another model of factors underlying trustworthy AI (or machine learning, specifically), see, for example, Toreini et al. [63] or the review by Glikson and Woolley [64], which additionally takes into account different kinds of AI (robotic, embedded, etc.).…”
Section: Discussion
Confidence: 99%