Artificial intelligence (AI) applications are increasingly used in everyday life. Whereas some of them are widely accepted (e.g., automatically compiled playlists), others are highly controversial (e.g., use of AI in the classroom). While public discourse is dominated by perceptions of the risks associated with AI, we take a fundamentally different approach by measuring the perceived risks and opportunities of AI applications while considering people's knowledge and confidence in their own knowledge. To this end, we assessed in two studies (N = 394 and N = 437) how knowledge about AI, as well as confidence in AI knowledge, is related to participants' risk–opportunity perception of AI scenarios from three domains: media, medicine, and autonomous driving. Results showed that both AI knowledge and confidence in AI knowledge are important predictors of people's risk–opportunity perception beyond people's attitudes towards AI. More specifically, people with more knowledge about AI exhibited a so-called risk blindness in that they underestimated the risks. On the other hand, higher confidence in one's AI knowledge influenced participants' opportunity perception. Knowledge and confidence thus open a new dimension of understanding people's perception of risks and opportunities in AI.
Artificial intelligence (AI)-based applications are an ever-expanding field, with an increasing number of sectors deploying this technology. While previous research has focused on trust in AI applications or familiarity as predictors of AI usage, we aim to expand current research by investigating the influence of knowledge as well as AI risk and opportunity perception as possible predictors of AI usage. To this end, we conducted a study (N = 450, representative for age and gender) covering a broad range of domains (health, eldercare, driving, data processing, and art), assessing well-established variables in AI research (trust, familiarity) as well as knowledge about AI and risk and opportunity assessment. We further investigated the influence of AI-use-related ratings on AI usage. Results show that the newly investigated variables best predict overall intention to use, above and beyond trust and familiarity. Higher AI-related knowledge, more positive use-related ratings, and lower risk perception significantly predict general AI use intention, with a similar trend emerging for domain-specific AI use intention. These findings highlight the relevance of knowledge, risk and opportunity assessment, and use-related ratings in understanding laypeople's intention to use AI-based applications, and they open a new set of research questions for understanding people's AI use intentions and their perception of AI.
Robots are increasingly present in our society. Their successful integration depends, however, on understanding and fostering pro-social behavior towards robots, in this case, helping. To better understand people's reported willingness to help robots across different contexts (delivery, medical, service, and security), we conducted two preregistered studies on a German-speaking population (N = 415 and N = 542, representative of age and gender). We assessed attitudes, knowledge about robots, and anthropomorphism, and investigated their effect on reported willingness to help. Results show that positive attitudes significantly predicted higher reported willingness to help. Contrary to our hypothesis, having more knowledge about robots increased reported willingness to help. Additionally, we found no effect of anthropomorphism, neither in the form of robot appearance nor as participants' own views about robots, on reported willingness to help. Furthermore, results point to a context dependency of willingness to help, with participants preferring to help robots in a medical context compared to a security one, for example. Our findings thus highlight the relevance of knowledge and attitudes in understanding helping behavior towards robots. Additionally, our results raise questions about the relevance of anthropomorphism in pro-sociality towards robots.