Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3544548.3581025
Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems

Cited by 27 publications (5 citation statements)
References 62 publications
“…Reliance and Its Appropriateness We measure the reliance of participants on the AI system via two metrics [22,50]: User Experience We measured participants' perceived autonomy [13], mental demand [21,30,33], perceived complexity [8], engagement [6], future use [30,31], satisfaction [8,19], perceived helpfulness [7,9,28], trust [48], self-efficacy [23] via 7-point Likert scales. The detailed questions are shown in Table 1.…”
Section: Measurements and Analysis Methods (citation type: mentioning)
confidence: 99%
“…With a similar idea to measuring quality, we embed error-based metrics including E_LLM and AE_LLM, which compute the error and absolute error between crowd workers and the LLM, respectively. In addition, we apply Agreement Fraction (He, Kuiper, and Gadiraju 2023), which calculates the rate of crowd workers' decisions that agree with the LLM's advice. Following previous research (Tolmeijer et al 2022), we consider two validated questionnaires using Likert scales to measure the subjective trust in the LLM: Trust in Automation (TiA) (Körber 2019) and Affinity for Technology Interaction Scale (ATI) (Franke, Attig, and Wessel 2019).…”
Section: Methods (citation type: mentioning)
confidence: 99%
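The metrics described in the excerpt above can be sketched in a few lines. This is a minimal interpretation, not the cited paper's code: the function names, the item-level pairing of worker and LLM judgments, and the averaging over items are all assumptions.

```python
from statistics import mean

def error_metrics(worker_scores, llm_scores):
    # E_LLM: signed error between crowd workers and the LLM, averaged over items.
    # AE_LLM: absolute error, averaged over items.
    # (Hypothetical signature; the paper does not specify the aggregation.)
    e = mean(w - l for w, l in zip(worker_scores, llm_scores))
    ae = mean(abs(w - l) for w, l in zip(worker_scores, llm_scores))
    return e, ae

def agreement_fraction(worker_decisions, llm_decisions):
    # Fraction of crowd-worker decisions that agree with the LLM's advice.
    agree = sum(w == l for w, l in zip(worker_decisions, llm_decisions))
    return agree / len(worker_decisions)
```

For example, if a worker agrees with the LLM on two of four decisions, `agreement_fraction` returns 0.5.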
“…In addition, we compose each unit using 4 statements that are correctly labeled by GPT-3.5, while 2 statements are incorrectly labeled, to mimic GPT-3.5's actual accuracy on our dataset. The order of the 6 statements is then shuffled for every crowd worker and every task to remove potential learning effects (He, Kuiper, and Gadiraju 2023).…”
Section: Crowdsourcing Task Design (citation type: mentioning)
confidence: 99%
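The unit-composition procedure quoted above (4 correctly labeled statements, 2 incorrectly labeled, shuffled per worker and task) can be sketched as follows. The function name and its inputs are illustrative assumptions, not the cited study's implementation.

```python
import random

def build_unit(correct, incorrect, rng=None):
    # Compose one task unit: 4 correctly labeled + 2 incorrectly labeled
    # statements, then shuffle the order independently for each worker/task
    # to remove potential learning effects.
    rng = rng or random.Random()
    unit = correct[:4] + incorrect[:2]
    rng.shuffle(unit)
    return unit
```

Passing a fresh `random.Random` (or calling the function anew) per worker and task yields an independent ordering each time.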
“…and associated skills to be observed, and can lead to an overestimation of one's own abilities, combined with an underestimation of the risks (He, Kuiper, and Gadiraju 2023).…” (translated from German)
Section: Pedagogy (citation type: unclassified)