Human–computer interaction (HCI) is a cornerstone of successful technical innovation in the logistics and supply chain sector. As a major element of social sustainability, this interaction is changing as artificial intelligence applications (Internet of Things, autonomous transport, the Physical Internet) are implemented, leading to greater machine autonomy and hence a transition of human operators from a primarily executive to a supervisory role. A fundamental question concerns the level of control transferred to machines such as autonomous vehicles and automated materials handling devices. Problems include a lack of human trust in automated decision making and an inclination to override the system when automated decisions are misperceived. This paper outlines a theoretical framework describing different levels of acceptance and trust as a key HCI element of technological innovation, and points to the possible danger of an artificial divide at both the individual and the firm level. Based on the findings of four benchmark cases, a classification of the roles of human employees in adopting innovations is developed. Measures at the operational, tactical, and strategic levels are discussed to improve HCI, in particular the capacity of individuals and firms to apply state-of-the-art techniques and to prevent an artificial divide, thereby increasing social sustainability.