With machine learning techniques and big data, algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to recognize algorithms as biased. While researchers have taken the problem of algorithmic bias seriously, the current discussion of algorithmic fairness tends to conceptualize 'fairness' primarily as a technical issue and attempts to implement pre-existing ideas of 'fairness' in algorithms. In this paper, I show that such a view of algorithmic fairness as a technical issue is unsatisfactory for the type of problem algorithmic fairness presents. Since decisions on fairness measures and the related techniques for algorithms essentially involve choices between competing values, 'fairness' in algorithmic fairness should be conceptualized first and foremost as a political issue, and it should be (re)solved through democratic communication. The aim of this paper, therefore, is to explicitly reconceptualize algorithmic fairness as a political question and to suggest that the current discussion of algorithmic fairness can be strengthened by adopting the accountability for reasonableness framework.
It is hard to disagree with the idea of responsible innovation (henceforth, RI), as it enables policymakers, scientists, technology developers, and the public to better understand and respond to the social, ethical, and policy challenges raised by new and emerging technologies. RI has gained prominence on the policy agenda in Europe and the United States over the last few years, and, along with its rising importance in policy-making, there is a burgeoning research literature on the topic. Given the historical context from which RI emerged, it should not be surprising that the current discourse on RI is predominantly based on liberal democratic values. Yet the bias towards liberal democratic values will inevitably limit the discussion of RI, especially in cases where liberal democratic values are not taken for granted. As such, there is an urgent need to return to the normative foundation of RI and to explore the notion of 'responsible innovation' from nonliberal democratic perspectives. Against this background, this paper seeks to demonstrate the problematic consequences of RI solely grounded in or justified by liberal democratic values. The paper casts the argument in the form of a dilemma, labelled The Decent Nonliberal Peoples' Dilemma, and uses it to illustrate the problems of the Western bias.
A closer look at the theories and questions in philosophy of technology and ethics of technology shows the absence and marginality of non-Western philosophical traditions in these discussions. Although some philosophers have increasingly sought to introduce non-Western philosophical traditions into the debates, there are few systematic attempts to construct and articulate general accounts of ethics and technology based on other philosophical traditions. This situation is understandable, for the questions of modern science and technology appear to have originated in the West; at the same time, the situation is undesirable. The overall aim of this paper, therefore, is to introduce an alternative account of ethics of technology based on the Confucian tradition. In doing so, it is hoped that the paper can open up a relatively uncharted field in philosophy of technology and ethics of technology.
Cultural differences pose a serious challenge to the ethics and governance of artificial intelligence (AI) from a global perspective. Cultural differences may enable malign actors to disregard the demands of important ethical values or even to justify violating them through deference to the local culture, either by asserting that the local culture lacks specific ethical values, e.g., privacy, or by claiming that the local culture upholds conflicting values, e.g., that state intervention is good. One response to this challenge is the human rights approach to AI governance, which is intended to be a universal and globally enforceable framework. The proponents of the approach, however, have so far neglected the challenge from cultural differences or left the implications of cultural diversity out of their works. This is surprising because human rights theorists have long recognized the significance of cultural pluralism for human rights. Accordingly, the approach may not be straightforwardly applicable in "non-Western" contexts because of cultural differences, and it may also be critiqued as philosophically incomplete insofar as it does not account for the (non-)role of culture. This commentary examines the human rights approach to AI governance with an emphasis on cultural values and the role of culture. In particular, I show that the consideration of cultural values is essential to the human rights approach for both philosophical and instrumental reasons.