2022
DOI: 10.1109/tse.2021.3101478

Automatic Fairness Testing of Neural Classifiers Through Adversarial Sampling

Abstract: Although deep learning has demonstrated astonishing performance in many applications, there are still concerns about its dependability. One desirable property of deep learning applications with societal impact is fairness (i.e., non-discrimination). Unfortunately, discrimination might be intrinsically embedded into the models due to discrimination in the training data. As a countermeasure, fairness testing systematically identifies discriminatory samples, which can be used to retrain the model and improve th…

Cited by 17 publications (15 citation statements)
References 47 publications (117 reference statements)
“…The first question is how to realize the domain transformation function T_{A→B}. Note that this is straightforward for structured or text data, where it can be done by replacing the protected feature or token with a value from a predefined domain [55, 56]. However, for image data, the sensitive attribute of interest is hidden from the input feature space.…”
Section: Domain Transformation (mentioning)
confidence: 99%
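For structured data, the transformation T_{A→B} described in the excerpt amounts to swapping the protected feature's value while leaving everything else unchanged. A minimal sketch follows; the feature name "gender" and its domain are illustrative assumptions, not details from the cited paper:

```python
# Sketch of a domain transformation T_{A->B} for structured (tabular) data:
# produce a counterpart sample by replacing the protected feature's value.
# The feature name "gender" and the sample values are hypothetical.

def transform_domain(sample: dict, protected: str, target_value) -> dict:
    """Return a copy of `sample` with the protected feature set to `target_value`."""
    counterpart = dict(sample)
    counterpart[protected] = target_value
    return counterpart

sample = {"age": 35, "gender": "female", "income": 52000}
counterpart = transform_domain(sample, "gender", "male")
# The pair (sample, counterpart) differs only in the protected feature;
# an individually fair classifier should assign both the same label.
```

As the excerpt notes, this replace-and-compare scheme does not carry over to images, where the sensitive attribute is not an explicit input feature.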
“…The above metrics enable us to evaluate a model's fairness adequacy. The follow-up questions are (1) how to generate diverse test cases to improve the fairness adequacy, and (2) how to select the most valuable test cases to enhance the model's fairness through augmented training, which has been shown to be useful in enhancing the robustness or fairness of DNNs [25, 55, 56].…”
Section: Fairness Enhancement (mentioning)
confidence: 99%
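The two follow-up steps named in the excerpt, flagging individually discriminatory test cases and collecting them for augmented training, might be sketched as below. The `predict` function is a stand-in for any classifier, and its biased rule exists only to make the example self-contained:

```python
# Sketch: flag individually discriminatory samples and collect them as an
# augmentation set for fairness-enhancing retraining. All names and the
# deliberately biased `predict` rule are hypothetical.

def predict(sample: dict) -> int:
    # Stand-in classifier with an intentionally discriminatory rule.
    return 1 if sample["gender"] == "male" and sample["income"] > 40000 else 0

def is_discriminatory(sample: dict, protected: str, domain: list) -> bool:
    """True if changing only the protected feature changes the prediction."""
    base = predict(sample)
    return any(predict({**sample, protected: v}) != base for v in domain)

candidates = [
    {"gender": "female", "income": 52000},
    {"gender": "female", "income": 30000},
]
augmentation_set = [s for s in candidates
                    if is_discriminatory(s, "gender", ["male", "female"])]
# Only the first candidate flips the prediction under the swap; such samples
# would be labeled and added to the training data for a retraining pass.
```

In this sketch "most valuable" is approximated crudely as "provably discriminatory"; the cited work's actual selection criterion may differ.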