"As a society, we are now at a crucial juncture in determining how to deploy AI-based technologies in ways that promote, not hinder, democratic values such as freedom, equality, and transparency." 1 ABSTRACT Emerging across many disciplines are questions about algorithmic ethics -about the values embedded in artificial intelligence and big data analytics that increasingly replace human decisionmaking. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired for reasons she will never know and cannot be articulated. In the public sector, the opacity of algorithmic decisionmaking is particularly problematic both because governmental decisions may be especially weighty, and because democratically-elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field -dangerous because the predictions they make can be both erroneous and unfair, with none the wiser.We set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically-determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of the sight of the client agencies, the public, or both. To see just how impenetrable the resulting "black box" algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely-used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector To do this work, we identified what meaningful "algorithmic transparency" entails. We found that in almost every case, it wasn't provided. Over-broad assertions of trade secrecy were a problem. But contrary to conventional wisdom, they were not the biggest obstacle. It will not usually be necessary to release the code used to execute predictive models in order to dramatically increase transparency. We conclude that publicly-deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. Although it would require a multi-stakeholder process to develop best practices for record generation and disclosure, we present what we believe are eight principal types of information that such records should ideally contain.
Emerging across many disciplines are questions about algorithmic ethics, about the values embedded in artificial intelligence and big data analytics that increasingly replace human decisionmaking. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired for reasons she will never know and cannot be articulated. In the public sector, the opacity of algorithmic decisionmaking is particularly problematic both because governmental decisions may be especially weighty, and because democratically-elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field, dangerous because the predictions they make can be both erroneous and unfair, with none the wiser.

We set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically-determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of the sight of the client agencies, the public, or both. To see just how impenetrable the resulting "black box" algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely-used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector partnerships. The goal was to see if, using the open records process, we could discover what policy judgments these algorithms embody, and could evaluate their utility and fairness.

To do this work, we identified what meaningful "algorithmic transparency" entails. We found that in almost every case, it wasn't provided. Over-broad assertions of trade secrecy were a problem. But contrary to conventional wisdom, they were not the biggest obstacle. It will not usually be necessary to release the code used to execute predictive models in order to dramatically increase transparency.

We conclude that publicly-deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. Although it would require a multi-stakeholder process to develop best practices for record generation and disclosure, we present what we believe are eight principal types of information that such records should ideally contain.
relationship between the creativity-based view of originality and the ideology of the "romantic author": the notion that the writer is not merely a craftsman, but "a unique individual uniquely responsible for a unique product." 8 The connections between "romantic author" ideology and the legal rights of authors, however, were explored in the mid- to late-eighteenth century in both Germany 9 and England. 10 Much of the English exploration was in connection with two cases, Millar v. Taylor 11 and Donaldson v. Becket, 12 that at least by 1834 were extremely well-known in American legal circles, because Wheaton v. Peters, 13 the momentous first copyright decision of the U.S. Supreme Court, concerned similar issues and occasioned frequent references to Millar and Donaldson by litigants and Justices alike. Yet, as will be detailed below in Part II, 14 none of these debates had any significant influence on the concept of originality in American copyright law before the Civil War. Rather, courts continued to consider works to be original and copyrightable if they were created through the application of independent intellectual labor, even if that labor involved gathering and representing facts rather than expressing anything unique to an author.

Justice O'Connor's opinion in Feist Publications, Inc. v. Rural Telephone Service Co., Inc. 15 has focused attention on two Supreme Court cases decided, respectively, in 1879 and 1885: the TradeMark Cases 16 and Burrow-Giles Lithographic Co. v. Sarony. 17 For Justice O'Connor, the TradeMark Cases and Burrow-Giles were the first two cases in which the Supreme Court addressed originality, and they articulated from the very beginning exactly the same view that Feist itself adopts: that the originality requirement precludes any copyright protection for bare representations of fact, because such representations do not exhibit the creativity required by both the Copyright Act and the Constitution. If these cases were indeed the crucial turning points in the treatment of originality in American copyright law, one could argue that the concept of originality evolved because the Supreme Court had to confront for the first time the issues raised by these two cases: respectively, whether the federal constitution empowered Congress to regulate trademarks, and whether it empowered Congress to grant copyright protection to photographs.

This article does not seek to prove that the TradeMark Cases and Burrow-Giles have no place in a history of evolving concepts of originality in U.S. copyright law. It does seek to suggest, however, that those cases do not express and implement a change in conception of originality nearly as clearly as it would appear from their treatment in Feist, and that it is therefore possible that another factor made a major contribution to that change. As for the TradeMark Cases, the language in Justice Miller's opinion for the Court, passages of which are quoted and paraphrased in Feist, is much more equivocal than it might at first appear to modern eyes. When Justice Miller seeks to...