The ubiquity of systems using artificial intelligence, or "AI", has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before; applications range from clinical decision support to autonomous driving and predictive policing. That said, our AIs continue to lag in common sense reasoning [McCarthy, 1960], and thus there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014].

How can we take advantage of what AI systems have to offer, while also holding them accountable? In this work, we focus on one tool: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017a], so thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems.

Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find that there exists great variation around when explanation is demanded, but there are also important consistencies: when demanding explanation from humans, what we typically want to know is whether and how certain input factors affected the final decision or outcome.

These consistencies allow us to list the technical considerations that must be addressed if we wish AI systems to provide the kinds of explanations currently required of humans under the law. Contrary to the popular view of AI systems as indecipherable black boxes, we find that this level of explanation should generally be technically feasible, though it may sometimes be practically onerous: some aspects of explanation that are simple for humans to provide are challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that, for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard.
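To make the feasibility claim concrete, the following minimal Python sketch illustrates one common way to answer "did this input factor affect the decision, and how": perturb each factor toward a neutral baseline and observe how the model's score changes. It is not drawn from the paper; the loan_model function, the feature names, and the baseline values are illustrative assumptions standing in for whatever decision system and reference points a real deployment would use.

```python
# Minimal sketch (illustrative, not from the paper): probing whether and how
# individual input factors affected a decision, via counterfactual perturbation.

def loan_model(applicant: dict) -> float:
    """Toy stand-in for an opaque decision system: returns an approval score."""
    return (
        0.5 * applicant["income"] / 100_000
        + 0.3 * (applicant["credit_score"] - 300) / 550
        - 0.2 * applicant["debt_ratio"]
    )

def factor_influences(model, applicant: dict, baselines: dict) -> dict:
    """For each factor, report how the score changes when that factor is
    replaced by a neutral baseline value, holding everything else fixed."""
    original = model(applicant)
    influences = {}
    for factor, baseline in baselines.items():
        perturbed = dict(applicant, **{factor: baseline})  # swap one factor
        influences[factor] = original - model(perturbed)
    return influences

applicant = {"income": 45_000, "credit_score": 610, "debt_ratio": 0.42}
baselines = {"income": 60_000, "credit_score": 700, "debt_ratio": 0.30}

print("score:", round(loan_model(applicant), 3))
for factor, delta in factor_influences(loan_model, applicant, baselines).items():
    print(f"{factor}: contributed {delta:+.3f} relative to baseline")
```

The same perturbation pattern applies regardless of the model's internals, which is why this kind of input-influence explanation is generally feasible even for systems treated as black boxes; the practical burden lies in choosing meaningful baselines and in the cost of re-querying the model.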
In January 2015, the Global Network of Internet & Society Research Centers (NoC) published the results of a globally coordinated, independent academic research project exploring multistakeholder governance models. Facilitated by the Berkman Center for Internet & Society at Harvard University, the work evaluated a wide range of governance groups with the goal of contributing meaningfully to the current debate around the future of the Internet governance ecosystem. The report, entitled Multistakeholder as Governance Groups: Observations From Case Studies, included twelve case studies of real-world governance structures from around the world, drawn from both inside and outside the sphere of Internet governance. The report also included a synthesis paper, which drew from the case studies lessons that challenged conventional thinking about the formation, operation, and critical success factors of governance groups. Through this work, the Network of Centers hopes to demonstrate new strategies and approaches for academia regarding its roles in research, facilitation and convening, and education and communication in the Internet age. This ambition includes creating outputs that are useful, actionable, and timely for policymakers and stakeholders. In that spirit, this document is intended to translate our original report into a form useful for those creating, convening, or leading governance groups. It is our goal that this document provide an operational starting place for those who wish to learn more about some of the components critical to the success of a governance group. The original report goes into far greater depth on both the details of the case studies and the lessons learned from them, whereas this document highlights only a few of the points most relevant for operationalizing the findings of the full report.