Research to date on the fairness, accountability, and transparency of algorithmic systems has largely focused on identifying failures of current systems and on technical interventions intended to reduce bias in computational processes. Researchers have given less attention to methods that account for the social and political contexts of specific, situated technical systems at their points of use. Co-developing algorithmic accountability interventions in communities supports outcomes that are more likely to address problems in their situated context and re-center power with those most disparately affected by the harms of algorithmic systems. In this paper we report on our experiences using participatory and co-design methods for algorithmic accountability in a project called the Algorithmic Equity Toolkit. The main insights we gleaned from our experiences were: (i) many meaningful interventions toward equitable algorithmic systems are non-technical; (ii) community organizations derive the most value from localized materials rather than from what is "scalable" beyond a particular policy context; (iii) framing harms around algorithmic bias suggests that more accurate data is the solution, at the risk of missing deeper questions about whether some technologies should be used at all. More broadly, we found that community-based methods are important inroads to addressing algorithmic harms in their situated contexts.
Recent concern about the harms of information technologies motivates consideration of regulatory action to forestall or constrain certain developments in the field of artificial intelligence (AI). However, definitional ambiguity hampers conversation about this urgent topic of public concern. Legal and regulatory interventions require agreed-upon definitions, but consensus around a definition of AI has been elusive, especially in policy conversations. With an eye towards practical working definitions and a broader understanding of positions on these issues, we survey experts and review published policy documents to examine researcher and policy-maker conceptions of AI. We find that while AI researchers favor definitions of AI that emphasize technical functionality, policy-makers instead use definitions that compare systems to human thinking and behavior. We point out that definitions adhering closely to the functionality of AI systems are more inclusive of technologies in use today, whereas definitions that emphasize human-like capabilities are most applicable to hypothetical future technologies. As a result of this gap, ethical and regulatory efforts may overemphasize concern about future technologies at the expense of pressing issues with existing deployed technologies.
Motivated by the extensively documented disparate harms of artificial intelligence (AI), many recent practitioner-facing reflective tools have been created to promote responsible AI development. However, the use of such tools internally by technology development firms treats responsible AI as an issue of closed-door compliance rather than a matter of public concern. Recent advocate and activist efforts intervene in AI as a public policy problem, prompting a growing number of cities to pass bans or other ordinances on AI and surveillance technologies. In support of this broader ecology of political actors, we present a set of reflective tools intended to increase public participation in technology advocacy for AI policy action. To this end, the Algorithmic Equity Toolkit (the AEKit) provides a practical policy-facing definition of AI, a flowchart for …