Social development organizations increasingly employ artificial intelligence (AI)-enabled tools to help team members collaborate effectively and efficiently. These tools support a range of team management tasks and activities. Drawing on the unified theory of acceptance and use of technology (UTAUT), this study explores the factors influencing employees' use of AI-enabled tools. The study extends the model in two ways: a) by evaluating the impact of these tools on employees' collaboration and b) by exploring the moderating role of AI aversion. Data were collected through an online survey of employees working with AI-enabled tools. The research model was analyzed using partial least squares (PLS) in two steps: assessment of the measurement model followed by the structural model. The results revealed that the antecedent variables — effort expectancy, performance expectancy, social influence, and facilitating conditions — are positively associated with the use of AI-enabled tools, which in turn is positively related to collaboration. AI aversion also significantly moderated the relationship between performance expectancy and use of technology. These findings imply that organizations should focus on building an environment conducive to adopting AI-enabled tools while also addressing employees' concerns about AI.
Purpose: With the increasing adoption of artificial intelligence (AI)-based decision-making, organizations are facilitating human–AI collaboration. This collaboration can occur in a variety of configurations of the division of labor, differing in whether the interdependence is parallel or sequential and whether specialization is present. This study explores the extent to which humans express comfort with different models of human–AI collaboration.
Design/methodology/approach: Situational response surveys were used to identify configurations in which humans experience the greatest trust, role clarity and preferred feedback style. Regression analysis was used to analyze the results.
Findings: Some configurations contribute to greater trust and role clarity with AI as a colleague. There is no configuration in which AI as a colleague produces lower trust than a human colleague. At the same time, human distrust of AI may be less about human vs AI and more about the division of labor within which humans and AI work.
Practical implications: The study explores the extent to which humans express comfort with different models of algorithms as partners. It focuses on work design and the division of labor between humans and AI. The findings emphasize the role of work design in human–AI collaboration: some human–AI work designs should be avoided because they reduce trust. Organizations need to consider carefully the impact of design on building trust and gaining acceptance of the technology.
Originality/value: The paper's originality lies in its focus on the design of collaboration rather than on the performance of the team.