Philosophical orthodoxy holds that pains are mental states, taking this to reflect the ordinary conception of pain. Despite this, evidence is mounting that English speakers do not tend to conceptualize pains in this way; rather, they tend to treat pains as bodily states. We hypothesize that this is driven by two primary factors -- the phenomenology of feeling pains and the surface grammar of pain reports. There is reason to expect, however, that neither of these factors is culturally specific, and thus that the empirical findings for English speakers will generalize to other cultures and other languages. In this article we begin to test this hypothesis, reporting the results of two cross-cultural studies comparing judgments about the location of referred pains (cases where the felt location of the pain diverges from the bodily damage) between two groups -- Americans and South Koreans -- that we might otherwise expect to differ in how they understand pains. In line with our predictions, we find that both groups tend to conceive of pains as bodily states.
Bias is not only an issue of social impact and governance in the humanities, but also of system robustness. As computers evolve into autonomous intelligence systems based on artificial neural networks, algorithmic bias can be introduced throughout the system construction process. The objective of this paper is to examine the forms of bias that arise at each stage of an artificial intelligence pipeline, the fairness criteria used to judge bias, and methods for mitigating it. Different types of fairness are difficult to satisfy simultaneously, and the appropriate combination of criteria and factors depends on the field and context in which AI is applied. No single method for mitigating bias in training data, classifiers, or predictions completely eliminates bias, and a balance between bias mitigation and accuracy must be sought. Even when an AI audit identifies bias through unlimited access to an algorithm, it can be difficult to determine whether the algorithm itself is biased. Bias mitigation technology is moving beyond simply removing bias toward jointly reducing bias, securing system robustness, and reconciling the various types of fairness. In conclusion, these characteristics imply that policies and education addressing AI bias should go beyond recognizing issues at a conceptual level, and instead pursue bias recognition and adjustment grounded in an understanding of the underlying systems.
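To make the claim that different fairness types are hard to satisfy simultaneously concrete, the following is a minimal sketch (not drawn from the paper itself; all data and function names are hypothetical toy constructions) comparing two common criteria, demographic parity and equalized odds, on a synthetic dataset where the groups have different base rates:

```python
# Minimal sketch: two standard fairness criteria measured on toy data.
# Demographic parity compares positive-prediction rates across groups;
# equalized odds compares error rates (TPR/FPR) across groups. When the
# groups' true base rates differ, even a perfectly accurate classifier
# satisfies equalized odds while violating demographic parity.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)|: gap in positive rates."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest cross-group gap in FPR (y=0) or TPR (y=1)."""
    gaps = []
    for y in (0, 1):
        rate0 = y_pred[(group == 0) & (y_true == y)].mean()
        rate1 = y_pred[(group == 1) & (y_true == y)].mean()
        gaps.append(abs(rate0 - rate1))
    return max(gaps)

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                       # protected attribute A
# Hypothetical base rates: 30% positives in group 0, 60% in group 1.
y_true = (rng.random(n) < np.where(group == 0, 0.3, 0.6)).astype(int)
y_pred = y_true.copy()                              # a "perfect" classifier

print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"equalized odds gap:     {equalized_odds_gap(y_true, y_pred, group):.3f}")
# Prints a parity gap near 0.3 but an equalized-odds gap of 0.0,
# illustrating that driving one criterion to zero need not help the other.
```

Under these toy assumptions, enforcing demographic parity would require the classifier to deviate from the true labels, trading accuracy for parity, which is the mitigation-versus-accuracy balance the abstract describes.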