Socially desirable responding (SDR), the tendency to present oneself in an overly positive light and to downplay negative attributes, has frequently been discussed as an issue hampering the validity and interpretation of self-reports (e.g.,
With rapidly decreasing purchase prices of electric vehicles, charging costs are becoming ever more important for the diffusion of electric vehicles required to decarbonize transport. However, the costs of charging electric vehicles in Europe remain largely unknown. Here we develop a systematic classification of charging options, gather extensive market data on equipment costs, and employ a levelized cost approach to model charging costs in 30 European countries (European Union 27, Great Britain, Norway, Switzerland) and for 13 different charging options for private passenger transport. The findings demonstrate large variation in charging costs across countries and charging options, suggesting different policy options to reduce them. We further analyze the impact and relevance of utilization rates at publicly accessible charging stations. The results reveal that charging costs at these stations are competitive with fuel costs at utilization rates already typical today.
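The levelized cost approach named above can be illustrated with a minimal sketch: lifetime costs (upfront equipment cost plus discounted annual operating and electricity costs) divided by discounted lifetime energy delivered, yielding a cost per kWh charged. The function and all numbers in the usage example are hypothetical illustrations, not the study's actual model or data.

```python
def levelized_cost_of_charging(capex, opex_per_year, energy_per_year_kwh,
                               electricity_price, lifetime_years, discount_rate):
    """Levelized cost per kWh charged (hypothetical sketch).

    Discounted lifetime costs divided by discounted lifetime energy:
    capex is paid upfront; operating costs and electricity purchases
    recur each year and are discounted to present value.
    """
    annual_cost = opex_per_year + electricity_price * energy_per_year_kwh
    discounted_costs = capex + sum(
        annual_cost / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1)
    )
    discounted_energy = sum(
        energy_per_year_kwh / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1)
    )
    return discounted_costs / discounted_energy

# Illustrative home wallbox: 1000 EUR equipment, 50 EUR/yr maintenance,
# 2500 kWh/yr charged, 0.30 EUR/kWh electricity, 10-year life, 5% discount rate.
lcoc = levelized_cost_of_charging(1000, 50, 2500, 0.30, 10, 0.05)
print(f"{lcoc:.3f} EUR/kWh")
```

In a sketch like this, the gap between the levelized cost and the bare electricity price reflects how equipment costs are spread over delivered energy, which is why utilization matters so much for publicly accessible stations.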
The role of artificial intelligence (AI) in organizations has fundamentally changed from performing routine tasks to supervising human employees. While prior studies focused on normative perceptions of such AI supervisors, employees’ behavioral reactions toward them have remained largely unexplored. We draw from theories on AI aversion and appreciation to tackle the ambiguity within this field and investigate whether and why employees adhere to unethical instructions from either a human or an AI supervisor. In addition, we identify employee characteristics affecting this relationship. To inform this debate, we conducted four experiments (total N = 1701) and used two state-of-the-art machine learning methods (causal forest and transformers). We consistently find that employees adhere less to unethical instructions from an AI than from a human supervisor. Further, individual characteristics such as the tendency to comply without dissent or age constitute important boundary conditions. In addition, Study 1 identifies the perceived mind of the supervisor as an explanatory mechanism. Two pre-registered studies generate further insights on this mediator by manipulating perceived mind between two AI supervisors (Study 2) and between two human supervisors (Study 3). In (pre-registered) Study 4, we replicate the resistance to unethical instructions from AI supervisors in an incentivized experimental setting. Our research generates insights into the ‘black box’ of human behavior toward AI supervisors, particularly in the moral domain, and showcases how organizational researchers can use machine learning methods as powerful tools to complement experimental research for the generation of more fine-grained insights.