Artificial intelligence (AI) governance is required to reap the benefits and manage the risks brought by AI systems. This means that ethical principles, such as fairness, must be translated into practicable AI governance processes. A concise definition of AI governance would allow researchers and practitioners to identify the constituent parts of the complex problem of translating AI ethics into practice. However, few efforts have been made to define AI governance thus far. To bridge this gap, this paper defines AI governance at the organizational level. Moreover, we delineate how AI governance enters a governance landscape comprising numerous existing areas, such as corporate governance, information technology (IT) governance, and data governance, and we position AI governance within an organization's governance structure in relation to these areas. Our definition and positioning of organizational AI governance pave the way for crafting AI governance frameworks and offer a stepping stone on the pathway toward governed AI.
Artificial intelligence (AI) governance and auditing promise to bridge the gap between AI ethics principles and the responsible use of AI systems, but they require assessment mechanisms and metrics. Effective AI governance is not only about legal compliance; organizations can go beyond legal requirements by proactively considering the risks inherent in their AI systems. In the past decade, investors have become increasingly active in advancing corporate social responsibility and sustainability practices, and including nonfinancial information on environmental, social, and governance (ESG) issues in investment analyses has become mainstream practice. However, the AI auditing literature is mostly silent on the role of investors. The current study addresses two research questions: (1) how companies' responsible use of AI is included in ESG investment analyses and (2) what connections can be found between principles of responsible AI and ESG ranking criteria. We conducted a series of expert interviews and analyzed the data using thematic analysis. Awareness of AI issues, measuring AI impacts, and governing AI processes emerged as the three main themes. The findings indicate that AI remains a relatively unknown topic for investors, and taking the responsible use of AI into account in ESG analyses is not yet an established practice. However, AI is recognized as a potentially material issue for various industries and companies, suggesting that its incorporation into ESG evaluations may be justified. Standardized metrics for AI responsibility are needed, and critical bottlenecks and asymmetrical knowledge relations must be tackled.