We are already living in an algorithmic society. AI policies and regulations are now emerging at the same time as more is learned about the implications of bias in machine learning datasets, the surveillance risks of smart cities and facial recognition, and automated decision-making by government, among many other applications of AI and machine learning. Each of these issues raises concerns around ethics, privacy, and data protection. This paper introduces some of the key AI regulatory developments to date and the ways libraries have engaged in these processes. While many AI applications in libraries remain emergent or hypothetical, mature examples can be identified in research literature searching, language tools for textual analysis, and access to collection data. The paper summarises how library activities such as these are represented in national AI plans and how libraries have engaged with other aspects of AI regulation, including the development of ethical frameworks. Drawing on the sector's expertise in related regulatory issues, including copyright and data protection, the paper suggests further opportunities to contribute to the future of ethical, trustworthy, and transparent AI.
Navigating AI definitions, applications, and challenges

Varying definitions of artificial intelligence have emerged over the decades. These definitions are commonly summarised as being concerned with systems that think like humans, act like humans, think rationally, or act rationally (Russell & Norvig, 2020). Yet there is no clear consensus among researchers on a single definition, and AI thus remains somewhat difficult to define (Bringsjord & Govindarajulu, 2020). This is further complicated by the emergence of AI regulation, which has spurred the creation of additional policy-focused definitions that differ from those preferred by AI researchers. This has consequences for developing a shared understanding of what AI is, and for developing effective regulation. A recent study found that researchers centre system and technical elements in their definitions, while policymakers focus on how systems compare to human thought or behaviour (Krafft et al., 2020). Krafft et al. suggest that the OECD's policy-focused definition effectively reaches across the domains of research and policy, and this definition also frames this paper: "An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing