How does one do research on algorithms and their outputs when confronted with inherent algorithmic opacity and black-boxedness, as well as with the limitations of API-based research and the data access gaps imposed by platforms’ gate-keeping practices? This article outlines the methodological steps we undertook to manoeuvre around these obstacles. It is a “byproduct” of our investigation into datafication and the way algorithmic identities are produced for personalisation, ad delivery and recommendation. Following Paßmann and Boersma’s (2017) suggestion to pursue “practical transparency” and focus on particular actors, we experiment with different avenues of research. We develop and employ an approach of letting the platforms speak and making the platforms speak. In doing so, we also use non-traditional research tools, such as transparency and regulatory tools, and repurpose them as objects of/for study. Empirically testing the applicability of this integrated approach, we elaborate on the possibilities it offers for the study of algorithmic systems, while remaining cognizant of its limitations and shortcomings.
If users are being governed by algorithms, and companies and regulators are proposing ways of governing algorithms, with this paper we would like to discuss and propose a third type of governance, one where users have agency, control and governing power(s) over algorithmic systems and their outputs. Our main research question is: how do we enable users to actively govern algorithms, instead of passively being governed by them? And what do users need in order to be algorithmically governed in a way that enables more agency, autonomy and control when interacting with AI systems and their outputs? Rather than gathering insights in the abstract, we answered this question through a guided and supportive process in which participants could reflect on the process and formulate and elaborate their insights, thoughts, needs and requirements based on their lived experience, i.e., after a real interaction with these algorithmic systems. We conducted participatory technographic research with 47 participants, through a multi-stage process consisting of a survey, Subject Access Requests (Article 15 of the General Data Protection Regulation), purposeful interaction with the transparency tools of seven chosen platforms, and extensive structured research diaries. A quali-quantitative analysis of the insights enabled us to formulate the participants’ requirements for governing algorithms and for being governed by them in a way that enables their agency, control and autonomy. These requirements are translatable and implementable at the user-interaction level, via technology design and through regulatory mechanisms.