2019 IEEE Conference on Visual Analytics Science and Technology (VAST)
DOI: 10.1109/vast47406.2019.8986948

FAIRVIS: Visual Analytics for Discovering Intersectional Bias in Machine Learning

Abstract: Figure 1: FAIRVIS integrates multiple coordinated views for discovering intersectional bias. Above, our user investigates the intersectional subgroups of sex and race, with a detailed comparison of the groups Caucasian Male and African-American Male. A. The Feature Distribution View allows users to visualize each feature's distribution and generate subgroups. B. The Subgroup Overview lets users select various fairness metrics to see the global average per metric and compare subgroups to …
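The caption above describes the system's core loop: generate intersectional subgroups from feature combinations, then compare a fairness metric per subgroup against the global average. Below is a minimal sketch of that computation, assuming a pandas DataFrame with hypothetical y_true/y_pred columns; it illustrates the idea, not FAIRVIS's implementation.

```python
# Minimal sketch (not FAIRVIS's code): enumerate intersectional subgroups
# and compute per-subgroup metrics to compare against the global average.
# Column names y_true / y_pred and the feature list are assumptions.
import pandas as pd

def subgroup_metrics(df, features, label="y_true", pred="y_pred"):
    rows = []
    for values, g in df.groupby(features):
        if not isinstance(values, tuple):  # single-feature grouping
            values = (values,)
        neg = g[g[label] == 0]
        rows.append({
            **dict(zip(features, values)),
            "size": len(g),
            "accuracy": float((g[label] == g[pred]).mean()),
            # false positive rate; undefined when the subgroup has no negatives
            "fpr": float((neg[pred] == 1).mean()) if len(neg) else float("nan"),
        })
    return pd.DataFrame(rows)

# e.g. subgroup_metrics(df, ["sex", "race"]) yields one row per sex x race
# subgroup; large gaps from the global accuracy or FPR flag candidate biases.
```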

Cited by 149 publications (78 citation statements). References 23 publications.

“…Using different textures to encode true/false positives/negatives, this tool allows fast and accurate estimation of performance metrics at multiple levels of detail. Recently, the issue of model fairness has drawn growing attention [80,83,97]. For example, Ahn et al. [80] proposed a framework named FairSight and implemented a visual analytics system to support the analysis of fairness in ranking problems.…”
Section: Analyzing Training Results (mentioning)
confidence: 99%
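As context for what "fairness in ranking" can mean, the sketch below computes one common notion, statistical parity in the top-k of a ranking. It is a generic illustration in the spirit of such tools, not FairSight's actual measure; the function and its inputs are hypothetical.

```python
# Generic ranking-fairness illustration (not FairSight's metric): compare a
# protected group's share of the top-k positions to its share overall.
def topk_parity(groups, k, protected):
    """groups[i] is the group label of the item ranked i-th (best first).

    Returns the ratio of the protected group's top-k share to its overall
    share; 1.0 means parity, values below 1.0 mean under-representation
    near the top of the ranking.
    """
    overall = sum(g == protected for g in groups) / len(groups)
    top = sum(g == protected for g in groups[:k]) / k
    return top / overall if overall else float("nan")

# e.g. topk_parity(["a", "b", "a", "b", "b", "b"], k=2, protected="b")
# -> 0.75: group "b" is 2/3 of all items but only 1/2 of the top two.
```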
“…Parameterization allows users to designate variables in code that they can then change at runtime through sliders and other widgets. However, parameterization in the notebook today typically only changes the runtime value of a variable and does not affect the value of the variable in written code³. This means that all GUI-based tuning only lasts for the current runtime session and is lost between sessions [10].…”
Section: Prior Work and Design Constraints (mentioning)
confidence: 99%
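The runtime-only behavior this excerpt describes is easy to reproduce in a Jupyter notebook with ipywidgets. A minimal sketch, assuming a live kernel; the parameter lr is hypothetical.

```python
# Sketch of notebook parameterization as described above: the slider rebinds
# `lr` in memory at runtime, but the literal 0.1 in the source cell is never
# rewritten, so the tuned value is lost when the kernel restarts.
import ipywidgets as widgets
from IPython.display import display

lr = 0.1  # the value "in written code"; the widget never edits this line

slider = widgets.FloatSlider(value=lr, min=0.001, max=1.0, step=0.001,
                             description="lr")

def on_change(change):
    global lr
    lr = change["new"]  # updates only the in-memory binding for this session

slider.observe(on_change, names="value")
display(slider)
```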
“…Since most participants dealt with developing machine learning (ML) models as part of their work, participants were particularly enthusiastic about having interactive tools to work with to help them understand model performance. Even with the availability of tests and metrics, understanding model performance is a challenging exploratory task for practitioners, especially for ML novices [19,29,3]. Interactive tools can help users better understand model performance [18,23], and a benefit of mage is that client tools can take in a user's model data and context, directly within their active code environment, and provide in-the-moment interactive support in situ without switching to an outside GUI tool.…”
Section: Interact To Explore Model Performance (mentioning)
confidence: 99%
“…With regard to computer science research, scholars have only recently started incorporating the concept of intersectionality into their work on algorithmic fairness [7,13]. Intersectional discrimination has been investigated in the context of automated facial analysis [6], expectation constraints [12], classification problems [30], and many other fields of artificial intelligence [19].…”
Section: Background: Intersectionality (mentioning)
confidence: 99%
“…Indeed, with the increasing automation of decision processes in all aspects of human life, avoiding unfair and unacceptable disadvantage for specific individuals and groups has turned into an urgent political objective, as well as a technical challenge. Recently, intersectionality theory [7,10,13] has enriched the debate on algorithmic fairness by showing how often discrimination affects people who lie at the intersection of several protected attributes. This finding should in turn lead to more effective actions in this emerging research field.…”
Section: Introduction (mentioning)
confidence: 99%
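A toy calculation makes the intersectional point in the last two excerpts concrete: a classifier can look perfectly fair on each protected attribute in isolation while treating intersectional subgroups very differently. The rates below are fabricated purely for illustration and assume equal-sized subgroups.

```python
# Fabricated illustration: positive rates per (sex, race) subgroup chosen so
# that every single-attribute marginal is 0.50, yet intersections differ 4x.
rates = {("male", "groupA"): 0.8, ("male", "groupB"): 0.2,
         ("female", "groupA"): 0.2, ("female", "groupB"): 0.8}

for attr, idx in (("sex", 0), ("race", 1)):
    for v in sorted({k[idx] for k in rates}):
        # each marginal averages exactly two equal-sized subgroups
        marginal = sum(r for k, r in rates.items() if k[idx] == v) / 2
        print(f"{attr}={v}: positive rate {marginal:.2f}")  # 0.50 every time

# Auditing sex or race alone reports parity (0.50 vs 0.50), while the
# male/groupA subgroup (0.80) and male/groupB subgroup (0.20) differ 4x.
```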