A number of governmental and nongovernmental organizations have made significant efforts to encourage the development of artificial intelligence in line with a series of aspirational concepts such as transparency, interpretability, explainability, and accountability. The difficulty at present, however, is that these concepts exist at a fairly abstract level; for them to have the tangible effects desired, they need to become more concrete and specific. This article undertakes precisely this process of concretisation, mapping how the different concepts interrelate and what each requires in order to move from a high-level aspiration to a detailed and enforceable requirement. We argue that the key concept in this process is accountability: unless an entity can be held accountable for compliance with the other concepts, and indeed more generally, those concepts cannot do the work required of them. There is a variety of taxonomies of accountability in the literature, but at the core of each appears to be a sense of “answerability”, a need to explain or to give an account. It is this ability to call an entity to account which provides the impetus for each of the other concepts and helps us to understand what each of them must require.
To realise accountable AI systems, different types of information from a range of sources need to be recorded throughout the system life cycle. We argue that knowledge graphs can support the capture and audit of such information; however, the creation of such accountability records must be planned and embedded within the different life cycle stages, e.g. during the design of a system, during implementation, etc. We propose a provenance-based approach that supports not only the capture of accountability information, but also abstract descriptions of accountability plans that guide the data collection process, all as part of a single knowledge graph. In this paper we introduce the SAO ontology, a lightweight generic ontology for describing accountability plans and corresponding provenance traces of computational systems; the RAInS ontology, which extends SAO to model accountability information relevant to the design stage of AI systems; and a proof-of-concept implementation that utilises the proposed ontologies to provide a visual interface for designing accountability plans and managing accountability records.
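To make the idea more concrete, the sketch below shows, in Python with rdflib, how a plan element defined at design time and a provenance record captured later might coexist in a single knowledge graph. The sao: namespace URI and the term names used here (AccountabilityPlan, InformationElement, hasInformationElement, realizes) are illustrative assumptions rather than the published SAO vocabulary; the prov: terms are from the W3C PROV-O ontology.

```python
# Minimal sketch: an accountability plan and a provenance trace in one graph.
# The sao: namespace and term names below are assumptions for illustration,
# not necessarily the published SAO ontology terms.
from rdflib import Graph, Namespace, Literal, RDF

SAO = Namespace("https://w3id.org/sao#")        # assumed namespace URI
PROV = Namespace("http://www.w3.org/ns/prov#")  # W3C PROV-O
EX = Namespace("http://example.org/")

g = Graph()
g.bind("sao", SAO)
g.bind("prov", PROV)
g.bind("ex", EX)

# Abstract plan drawn up at design time: what information should be recorded.
g.add((EX.designPlan, RDF.type, SAO.AccountabilityPlan))
g.add((EX.datasetSpec, RDF.type, SAO.InformationElement))
g.add((EX.designPlan, SAO.hasInformationElement, EX.datasetSpec))

# Concrete record captured later, linked back to the plan element it realises,
# with provenance of who supplied it.
g.add((EX.trainingDataRecord, RDF.type, PROV.Entity))
g.add((EX.trainingDataRecord, SAO.realizes, EX.datasetSpec))
g.add((EX.trainingDataRecord, PROV.wasAttributedTo, EX.systemDesigner))
g.add((EX.trainingDataRecord, PROV.value, Literal("ImageNet subset, v2021-03")))

print(g.serialize(format="turtle"))
```

Keeping the abstract plan and the concrete records in the same graph is what lets an auditor later ask whether the plan was actually followed.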
Pervasive systems are increasingly being deployed in new and innovative ways, be it in our homes, vehicles, or public spaces. Such systems have the potential to bring a wide range of benefits, blending advanced functionality with the physical environment. However, these systems also have the potential to drive real-world consequences through decisions, interactions, or actuations, and there is a real risk that their use can lead to harms (physical injuries, financial loss, or even death). These concerns appear ever more prevalent, as a growing sense of distrust has led to calls for more transparency and accountability surrounding the emerging technologies that increasingly pervade our world. A range of things can, and often do, go wrong, be they technical failures, user errors, or otherwise. As such, means to effectively review, understand, and act upon the inner workings of pervasive systems are becoming increasingly important. Means for reviewing and auditing how these systems are built, developed, and used are crucial to determining the causes of failures, preventing recurrences, and identifying the parties at fault. Yet, despite the wider landscape of societal and legal pressures for record keeping and increased accountability, implementing such transparency measures faces a range of challenges. This workshop will bring together a range of perspectives on how we can better audit and understand the complex, sociotechnical systems that increasingly affect us (whether directly or indirectly). From tools for data capture and retrieval, to technical, ethical, and legal challenges, to early ideas on concepts of relevance, we solicit submissions that help further our understanding of how pervasive systems can be built to be reviewable and auditable, helping them to be more transparent, trustworthy, and accountable. This work is licensed under a Creative Commons Attribution 4.0 International License.
In the original version of this chapter, Figure 3 was incorrect. It has been updated in the chapter as shown below. Fig. 3. RAInS classes as subclasses of SAO classes (blue-filled rectangles). Third-party classes reused from ML Schema and the Dublin Core vocabulary have green borders.
To enhance trustworthiness of AI systems, a number of solutions have been proposed to document how such systems are built and used. A key facet of realizing trust in AI is how to make such systems accountable: a challenging task, not least due to the lack of an agreed definition of accountability and differing perspectives on what information should be recorded and how it should be used (e.g., to inform audit). Information originates across the life cycle stages of an AI system and from a variety of sources (individuals, organizations, systems), raising numerous challenges around collection, management, and audit. In our previous work, we argued that semantic Knowledge Graphs (KGs) are ideally suited to address those challenges and we presented an approach utilizing KGs to aid in the tasks of modelling, recording, viewing, and auditing accountability information related to the design stage of AI system development. Moreover, as KGs store data in a structured format understandable by both humans and machines, we argued that this approach provides new opportunities for building intelligent applications that facilitate and automate such tasks. In this paper, we expand our earlier work by reporting additional detailed requirements for knowledge representation and capture in the context of AI accountability; these extend the scope of our work beyond the design stage, to also include system implementation. Furthermore, we present the RAInS ontology which has been extended to satisfy these requirements. We evaluate our approach against three popular baseline frameworks, namely, Datasheets, Model Cards, and FactSheets, by comparing the range of information that can be captured by our KGs against these three frameworks. We demonstrate that our approach subsumes and extends the capabilities of the baseline frameworks and discuss how KGs can be used to integrate and enhance accountability information collection processes.
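As an illustration of the kind of audit such a knowledge graph enables, the sketch below queries for planned information elements that have no corresponding captured record. The sao: terms, the file name, and the graph structure are the same illustrative assumptions as in the earlier sketch, not the published RAInS/SAO vocabulary.

```python
# Illustrative audit over an accountability knowledge graph: list plan elements
# for which no record has been captured. The file name and sao: terms are
# assumptions for this sketch.
from rdflib import Graph

g = Graph()
g.parse("accountability_records.ttl", format="turtle")  # hypothetical export

missing = g.query("""
    PREFIX sao: <https://w3id.org/sao#>
    SELECT ?plan ?element WHERE {
        ?plan a sao:AccountabilityPlan ;
              sao:hasInformationElement ?element .
        FILTER NOT EXISTS { ?record sao:realizes ?element . }
    }
""")

for plan, element in missing:
    print(f"Plan {plan} has no captured record for {element}")
```

Because plans and records share one graph, gaps between what was promised and what was documented can be surfaced with a single query rather than by manually cross-checking separate documents such as Datasheets or Model Cards.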