2021
DOI: 10.1038/s42256-021-00298-y

Institutionalizing ethics in AI through broader impact requirements

Abstract: Turning principles into practice is one of the most pressing challenges of artificial intelligence (AI) governance. In this article, we reflect on a novel governance initiative by one of the world's largest AI conferences. In 2020, the Conference on Neural Information Processing Systems (NeurIPS) introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research. Drawing insights from similar governance initiatives, including institutional review boards (IR…

Cited by 61 publications (33 citation statements)
References 45 publications
“…Fortunately, companies need not start from scratch: numerous translational mechanisms for AI governance have been proposed and studied [55–57]. These include impact assessment lists [58–60], model cards [61], datasheets [61–63], as well as human-in-the-loop protocols [64], standards and reporting guidelines for using AI systems [65–67], and the inclusion of broader impact requirements in software development processes [68]. 5…”
Section: The Need To Operationalise AI Governance
confidence: 99%
“…Identifying and addressing potential biases is an important step in the assessment process. There is currently momentum for AI researchers to include statements about potential societal impacts [225] when submitting their work to journals or conferences. Similar to privacy impact assessments, which are relied upon by data protection and privacy frameworks to gauge and respond to data privacy risks, such impact assessments provide a high-level structure that enables organizations to frame the risks of each algorithm or deployment while also accounting for the specifics of each use case.…”
Section: Impact Assessments
confidence: 99%
“…Other practices to aid the practitioner in envisioning the possible impacts of their work include: staged system roll-out or prototyping to get ahead of any unforeseen issues before full launch, and performing an impact investigation. Impact investigations are worthwhile, though they are neither simple nor straightforward and there are no clear norms (Prunkl et al., 2021; Partnership on AI, 2021).…”
Section: Ethical Considerations
confidence: 99%