Consumers' concerns about how companies gather and use their personal data can impede the widespread adoption of artificial intelligence (AI) technologies. This study demonstrates that mechanistic explanations of AI algorithms can mitigate such data collection concerns. Four independent online experiments show a negative effect of detailed mechanistic explanations on data collection concerns (Studies 1a and 1b), the mediating role of subjective understanding of how AI algorithms work (Study 2), and an increased likelihood of adopting AI technologies once data collection concerns have been mitigated (Study 3). These findings contribute to research on consumer privacy concerns and the adoption of AI technologies by identifying (1) a new inhibitor of data collection concerns, namely, mechanistic explanations of AI algorithms; (2) the psychological mechanisms underlying mechanistic explanation effects; and (3) how diminished data collection concerns promote AI technology adoption. These insights can help companies design more effective communication strategies that reduce the perceived opacity of AI algorithms, reassure consumers, and encourage their adoption of AI technologies.
Presently, most business-to-consumer interaction relies on consumer profiling to develop and deliver personalized products and services. These practices can be welfare-enhancing if properly regulated. At the same time, the risks posed by their abuse are significant, and it is no surprise that in recent times personalization has found itself at the centre of scholarly and regulatory debate. Existing and forthcoming regulations share a common perspective: given the capacity of microtargeting to undermine consumers' autonomy, the success of regulatory intervention depends primarily on people being aware of the personality dimension being targeted. Yet existing disclosures follow an individualized format, focusing solely on the relationship between the professional operator and its counterparty; this approach stands in contrast to sociological studies that consider interaction with, and observation of, peers to be essential components of decision making. Consideration of this "relational dimension" of decision making is missing both from consumer protection and from the debate on personalization. This article argues that consumers' awareness and understanding of personalization and its consequences could be improved significantly if information were offered in a relational format; accordingly, it reports the results of a study conducted in the streaming service market, showing that when information is presented in a relational format, people's knowledge and awareness of profiling and microtargeting increase significantly. The article further argues for the potential of relational disclosure as a general paradigm for advancing consumer protection.