“…What manifests from these pre-existing expectations is a paradigm in which human decision makers perceive and respond to advice generated by algorithms differently than advice generated by humans, even when the advice itself is identical. Various mechanisms underlying this difference in response are demonstrated throughout the literature, such as the tendency for humans to seek a social or parasocial relationship with the source of advice (Alexander, Blinder, & Zak, 2018; Önkal, Goodwin, Thomson, Gonul, & Pollock, 2009; Prahl & Van Swol, 2017), the persistent belief that human error is random and repairable whereas algorithmic error is systematic (Dietvorst et al., 2015; Dietvorst, Simmons, & Massey, 2016; Highhouse, 2008b), experts' domain confidence leading to underutilization of seemingly unnecessary algorithmic aids (Arkes, Dawes, & Christensen, 1986; Ashton, Ashton, & Davis, 1994), or a lack of training preventing a human user from properly utilizing an algorithmic aid (Mackay & Elam, 1992; cf. Green & Hughes, 1986).…”