In experiments, researchers commonly allocate subjects randomly and equally to the different treatment conditions before the experiment starts. While this approach is intuitive, it means that new information gathered during the experiment is not utilized until after the experiment has ended. Drawing on methodological approaches from other scientific disciplines such as computer science and medicine, we propose machine learning algorithms for subject allocation in experiments. Specifically, we discuss a Bayesian multi-armed bandit algorithm for randomized controlled trials and use Monte Carlo simulations to compare its efficiency with that of randomized controlled trials using a fixed and balanced subject allocation. Our findings indicate that randomized allocation based on Bayesian multi-armed bandits is more efficient and more ethical in most settings. We develop recommendations for researchers and discuss the limitations of our approach.
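To make the allocation mechanism concrete, below is a minimal sketch of Thompson sampling, one common Bayesian multi-armed bandit strategy, applied to a hypothetical two-arm trial with binary outcomes. The Beta(1, 1) priors, success rates, and sample size are illustrative assumptions, not details drawn from the study.

```python
import random

# Minimal sketch: Thompson sampling for sequential subject allocation
# in a two-arm trial with binary outcomes. All numbers below are
# hypothetical illustrations, not values from the abstract.

TRUE_SUCCESS = [0.45, 0.60]   # assumed per-arm success probabilities
N_SUBJECTS = 200              # assumed trial size

# Beta(1, 1) priors on each arm's success probability,
# tracked via success/failure counts.
successes = [1, 1]
failures = [1, 1]
allocations = [0, 0]

for _ in range(N_SUBJECTS):
    # Draw one sample from each arm's posterior and allocate the next
    # subject to the arm with the highest sampled success probability.
    draws = [random.betavariate(successes[a], failures[a]) for a in (0, 1)]
    arm = draws.index(max(draws))
    allocations[arm] += 1

    # Observe the (simulated) outcome and update that arm's posterior.
    if random.random() < TRUE_SUCCESS[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print("allocations per arm:", allocations)
print("posterior means:",
      [s / (s + f) for s, f in zip(successes, failures)])
```

Repeating such a simulated trial many times and comparing the share of subjects assigned to the better arm, along with the resulting estimates, against a fixed 50/50 split mirrors the kind of Monte Carlo comparison the abstract describes.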
Algorithms might prevent prejudices and increase objectivity in personnel selection decisions, but they have also been accused of being biased. We examine whether algorithm-based decision-making, or providing justifying information about the decision-maker (here: to prevent biases and prejudices and to make more objective decisions), helps organizations attract a diverse workforce. In two experimental studies in which participants go through a digital interview, we find support for overall negative effects of algorithms on fairness perceptions and organizational attractiveness. However, applicants with discrimination experiences tend to view algorithm-based decisions more positively than applicants without such experiences. We do not find evidence that providing justifying information affects applicants, regardless of whether they have experienced discrimination.
Machine-learning algorithms used in personnel selection are a promising avenue for several reasons, yet applicants' reactions to them are not well understood. We shift the focus to applicants' attributions about why an organization uses algorithms. Combining the human resources attributions model, signaling theory, and the existing literature on perceptions of algorithmic decision-makers, we theorize that using algorithms affects internal attributions of intent and, in turn, organizational attractiveness. In two experiments (N = 259 and N = 342), including a concurrent double randomization design for causal mediation inferences, we test our hypotheses in the applicant screening stage. The results of our studies indicate that control-focused attributions about personnel selection (cost reduction and applicant exploitation) are much stronger when algorithms are used, whereas commitment-focused attributions (quality enhancement and applicant well-being) are much stronger when human experts make selection decisions. We further find that algorithms have a large negative effect on organizational attractiveness that can be partly explained by these attributions. Implications for practitioners and academics are discussed.