Research suggests that algorithms (based on artificial intelligence or linear regression models) make better predictions than humans in a wide range of domains. Several studies have examined the degree to which people rely on algorithms. However, these studies have been mostly cross-sectional and thus have failed to capture the dynamic nature of algorithm use. In the present paper, we examined algorithm use with a novel longitudinal approach outside the lab. Specifically, we conducted two ecological momentary assessment studies in which 401 participants made financial predictions in two tasks over 18 days. Relying on the judge-advisor system framework, we examined how time interacted with advice source (human vs. algorithm) and advisor accuracy to predict advice taking. Our results showed that when the advice was inaccurate, people used algorithmic advice less than human advice across the period studied. Inaccurate algorithms were penalized logarithmically: the effect was strong at first but faded over time. This suggests that first impressions are crucial, producing large changes in advice taking at the beginning of the interaction that stabilize as the days go by. Inaccurate algorithms are therefore more likely than inaccurate humans to accrue a negative reputation, even when both perform at the same level.