2022
DOI: 10.1007/s00146-022-01534-8
The psychological and ethological antecedents of human consent to techno-empowerment of autonomous office assistants

Cited by 7 publications (4 citation statements)
References 70 publications
“…But the transfer of autonomy to machines is subject to certain conditions. Consistent with uses and gratifications theory and the technology acceptance model, existing research indicates that more positive attitudes and higher trust, perceived usefulness, and perceived ease of use are correlated with a higher intention to allow the autonomous assistant independence in decision-making (Modliński, 2022: 13). Moreover, the more human-like a non-human agent is, the higher the intention to empower it—but only if this agent simultaneously provides functional and visual anthropomorphic cues, a pattern explainable by the mimicry effect (Modliński, 2022: 15).…”
Section: Literature Review
confidence: 99%
“…Sofge et al. (2013) suggest that these differences depend on human perception and the type of decision that machines would ultimately make. Trust, perceived usefulness, and perceived ease of use positively correlate with willingness to let the autonomous agent make independent decisions (Modliński, 2022). In addition, prior experience and education level may influence a person's approval of an agent's autonomy (Madhavan and Wiegmann, 2007).…”
Section: Social Perception of Autonomous Agents (AAs)
confidence: 99%
“…For example, extroverted people seem more willing than introverted people to let the agent make independent decisions (Goldbach et al., 2019). Recent research suggests that people are reluctant to let AAs make decisions in driving, legal matters, medical treatments, and military actions, regardless of their (positive/negative) outcomes, which could be related to human-machine trans-role conflict (Modliński et al., 2022). However, this resistance decreases as AAs' perceived experience and expertise increase (Bigman and Gray, 2018).…”
Section: Social Perception of Autonomous Agents (AAs)
confidence: 99%
“…When they use AI technology, their power to make decisions decreases, and they become more dependent on technology [72]. This dependency further increases reliance on AI technology and reduces the exercise of their own mental effort [78]. AI is also making people lazy, as many tasks are performed automatically [79].…”
Section: Ethical Concerns About AI Technology in Education
confidence: 99%