Trust is essential to individuals' perception, behavior, and evaluation of intelligent agents; indeed, it is the primary motive for people to accept new technology. It is therefore crucial to repair trust when it is damaged. This study investigated how intelligent agents should apologize to recover trust and whether the effectiveness of an apology differs when the agent is human-like versus machine-like, drawing on two seemingly competing frameworks: the CASA (Computers-Are-Social-Actors) paradigm and automation bias. A 2 (agent: human-like vs. machine-like) × 2 (apology attribution: internal vs. external) between-subjects experiment was conducted (N = 193) in the context of the stock market. Participants were presented with a scenario in which they made investment choices with the help of an artificial intelligence agent's advice. To trace the trajectory of initial trust building, trust violation, and trust repair, we designed an investment game consisting of five rounds of eight investment choices (40 choices in total). The results show that trust was repaired more efficiently when a human-like agent apologized with internal rather than external attribution. The opposite pattern was observed among participants with machine-like agents: the external attribution condition showed better trust repair than the internal one. Both theoretical and practical implications are discussed.