The paper presents an approach for implementing inscrutable (i.e., non-explainable) artificial intelligence (AI), such as neural networks, in an accountable and safe manner in organizational settings. Drawing on an exploratory case study and the recently proposed concept of envelopment, it describes how an organization successfully “enveloped” its AI solutions to balance the performance benefits of flexible AI models against the risks that inscrutable models can entail. The authors present several envelopment methods—establishing clear boundaries within which the AI is to interact with its surroundings, choosing and curating the training data well, and appropriately managing input and output sources—alongside their influence on the choice of AI models within the organization. This work makes two key contributions. First, it introduces the concept of sociotechnical envelopment by demonstrating how an organization’s successful AI envelopment depends on the interaction of social and technical factors, thereby extending the literature’s focus beyond purely technical issues. Second, the empirical examples illustrate how operationalizing a sociotechnical envelopment enables an organization to manage the trade-off between the low explainability and high performance of inscrutable models. These contributions pave the way for more responsible, accountable AI implementations in organizations, in which humans can gain better control of even inscrutable machine-learning models.
Governments are increasingly relying on algorithmic decision-making (ADM) to deliver public services. Recent information systems literature has raised concerns about ADM's negative unintended consequences, such as widespread discrimination, which in extreme cases can be destructive to society. The extant empirical literature, however, has not sufficiently examined the destructive effects of governmental ADM. In this paper, we report on a case study of the Australian government's "Robodebt" programme, which was designed to automatically calculate and collect welfare overpayment debts from citizens but ended up causing severe distress to citizens and welfare agency staff. Employing perspectives from systems thinking and organisational limits, we develop a research model that explains how a socially destructive government ADM programme was initiated, sustained, and delegitimised. The model offers a set of generalisable mechanisms that can inform investigations of ADM's consequences. Our findings contribute to the literature on the unintended consequences of ADM and demonstrate to practitioners the importance of establishing robust governance infrastructures for ADM programmes.