2011 IEEE 11th International Conference on Data Mining Workshops
DOI: 10.1109/icdmw.2011.83

Fairness-aware Learning through Regularization Approach

Abstract: With the spread of data mining technologies and the accumulation of social data, such technologies and data are being used for determinations that seriously affect people's lives. For example, credit scoring is frequently determined based on the records of past credit data together with statistical prediction techniques. Needless to say, such determinations must be socially and legally fair from a viewpoint of social responsibility; namely, they must be unbiased and nondiscriminatory in sensitive features…

Cited by 319 publications (259 citation statements). References 14 publications.
“…Note that in our preliminary work [12], we took the approach of replacing X with x̄_s, which is a sample mean vector of x over a set of training samples whose corresponding sensitive feature is equal to s. However, we unfortunately failed to obtain good approximations by this approach. Finally, the prejudice remover regularizer R_PR(D, Θ) is…”
Section: Prejudice Remover (mentioning)
confidence: 99%
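For context, the prejudice remover regularizer quoted above penalizes an empirical approximation of the mutual information between the model's output and the sensitive feature. Below is a minimal sketch of that idea for logistic regression; it assumes a binary sensitive feature and a single shared weight vector (the original formulation trains per-group weights), and the function name and hyperparameter defaults are mine, not the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prejudice_remover_loss(w, X, y, s, eta=1.0, lam=0.01):
    """Logistic-regression negative log-likelihood plus a
    prejudice-remover term approximating the mutual information
    between the prediction and the binary sensitive feature s.
    eta trades accuracy against fairness; lam is an ordinary
    L2 penalty.  (Sketch: single weight vector, not the paper's
    per-group weights.)"""
    p = sigmoid(X @ w)          # model estimate of Pr[y=1 | x; w]
    eps = 1e-12                 # numerical guard inside logs

    # Standard logistic loss.
    nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    # Empirical Pr[y | s] and Pr[y], estimated by averaging the
    # model's outputs rather than the observed labels.
    p_y1 = p.mean()
    r_pr = 0.0
    for v in (0, 1):
        mask = (s == v)
        p_y1_s = p[mask].mean()
        # Sum the y=1 and y=0 contributions for this group.
        r_pr += np.sum(
            p[mask] * np.log((p_y1_s + eps) / (p_y1 + eps))
            + (1 - p[mask]) * np.log((1 - p_y1_s + eps) / (1 - p_y1 + eps))
        )
    r_pr /= len(y)

    return nll + eta * r_pr + 0.5 * lam * np.dot(w, w)
```

This objective can be minimized with any generic optimizer, e.g. scipy.optimize.minimize over w; larger values of eta push the predicted positive rate in each sensitive group toward the overall rate at some cost in accuracy.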
“…Note that we slightly changed this algorithm as described in [12], because the original algorithm may fail to stop…”
Section: [Y S] (mentioning)
confidence: 99%
“…In [12], the distribution of the non-protected attributes in the dataset is modified such that the protected attribute cannot be estimated from the non-protected attributes. Proposed methods for discrimination prevention using algorithm tweaking require some tweak of predictive models [7, 9, 14, 15, 18, 19, 35]. For example, in [18], the authors developed a strategy for relabeling the leaf nodes of a decision tree to make it discrimination-free…”
Section: Discrimination Prevention (mentioning)
confidence: 99%
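The leaf relabeling mentioned for [18] can be pictured as a greedy post-processing step over an already-trained tree. The sketch below is only an illustration under my own assumptions (per-leaf group counts, a demographic-parity gap as the discrimination measure, and a gain-per-accuracy-loss greedy rule); it is not the exact procedure of [18].

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    label: int      # current predicted class (0 or 1)
    n_prot: int     # protected-group training samples at this leaf
    n_unprot: int   # unprotected-group training samples at this leaf
    acc_loss: int   # training samples misclassified if the label flips

def relabel(leaves, n_prot_total, n_unprot_total, eps=0.01):
    """Greedily flip the leaves that buy the most reduction in the
    demographic-parity gap per unit of accuracy lost, until the
    gap drops below eps (hypothetical variant of leaf relabeling)."""
    def disc():
        pos_u = sum(l.n_unprot for l in leaves if l.label == 1) / n_unprot_total
        pos_p = sum(l.n_prot for l in leaves if l.label == 1) / n_prot_total
        return pos_u - pos_p

    while disc() > eps:
        def gain(l):
            # Reduction in disc() obtained by flipping leaf l.
            sign = 1 if l.label == 1 else -1
            return sign * (l.n_unprot / n_unprot_total
                           - l.n_prot / n_prot_total)
        best = max(leaves, key=lambda l: gain(l) / max(l.acc_loss, 1))
        if gain(best) <= 0:
            break               # no remaining flip reduces discrimination
        best.label = 1 - best.label
    return leaves
```

Because each accepted flip strictly reduces the gap, the loop terminates; the ratio criterion is one plausible way to spend as little accuracy as possible per unit of discrimination removed.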
“…These approaches classify discrimination into different types such as group discrimination, individual discrimination, direct and indirect discrimination. Based on that, methods for discrimination prevention have been proposed [1, 7, 9, 12–15, 17–19, 21, 35, 37, 43] which either use data preprocessing or algorithm tweaking. However, these works are mainly based on correlation or association-based measures which cannot be used to estimate the causal effect of the protected attributes on the decision…”
Section: Introduction (mentioning)
confidence: 99%
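To make the contrast concrete, the association-based measures referred to here typically reduce to group-rate comparisons, such as the statistical (demographic) parity difference. A minimal sketch, with names of my own choosing:

```python
import numpy as np

def statistical_parity_difference(y_pred, s):
    """Association-based (non-causal) discrimination measure:
    the difference in positive-outcome rates between the
    unprotected (s == 0) and protected (s == 1) groups."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return y_pred[s == 0].mean() - y_pred[s == 1].mean()
```

A value near zero says the two groups receive positive decisions at similar rates, but, as the quoted passage notes, such a measure captures association only and says nothing about the causal effect of the protected attribute.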
“…Proposed methods for discrimination prevention using model adaptation include the tweaking of decision trees [2], naive Bayes classifiers [1], and logistic regression [8]. All these methods require that the learning model or algorithm is tweaked, and the first two methods are specific to their respective classifiers.…”
Section: Related Work (mentioning)
confidence: 99%