“…It was found that trust in a machine advisor is modulated by general factors such as its perceived expertise (Muir & Moray, 1996; Muir & Moray, 1994; Liu, 2010), but also by factors that relate to its similarity to humans, such as its human-like appearance (Smith, Allaham, & Wiese, 2016; Madhavan & Wiegmann, 2007) or its sensitivity to the advisee's emotional states (Picard, 1997; Goetz et al., 2003). In particular, it was shown that agents with a human-like physique receive higher trust ratings than mechanistic-looking agents (Smith et al., 2016; Madhavan & Wiegmann, 2007), and that participants comply more with instructions given by human-like agents than by machine-like agents (Goetz et al., 2003). Interestingly, however, when different agent features, such as the capacity to feel sensations like hunger (i.e., experience) and the capacity to plan (i.e., agency), communicated inconsistent levels of humanness (e.g., the agent seemed able to plan but unable to feel), participants held negative attitudes towards the advisor and showed low levels of trust, even though at least one of the features was high in humanness (Gray & Wegner, 2012).…”