Proceedings of the 2018 International Conference on Management of Data
DOI: 10.1145/3183713.3193568

A Nutritional Label for Rankings

Abstract: Algorithmic decisions often result in scoring and ranking individuals to determine credit worthiness, qualifications for college admissions and employment, and compatibility as dating partners. While automatic and seemingly objective, ranking algorithms can discriminate against individuals and protected groups, and exhibit low diversity. Furthermore, ranked results are often unstable: small changes in the input data or in the ranking methodology may lead to drastic changes in the output, making the result uninformative…
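
To make the instability claim concrete, here is a minimal sketch (not from the paper; the candidates, attributes, and weights are hypothetical) of a score-based ranking in which a small shift in one attribute weight reorders the result:

```python
# Hypothetical score-based ranking: each candidate gets a weighted sum of
# normalized attributes, and candidates are listed from highest to lowest score.
candidates = {
    "A": {"gpa": 0.97, "test": 0.62},
    "B": {"gpa": 0.80, "test": 0.85},
    "C": {"gpa": 0.88, "test": 0.72},
}

def rank(weights):
    # Weighted-sum score, then sort candidates by score, highest first.
    score = lambda attrs: sum(w * attrs[k] for k, w in weights.items())
    return sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)

print(rank({"gpa": 0.60, "test": 0.40}))  # ['A', 'B', 'C']
print(rank({"gpa": 0.55, "test": 0.45}))  # ['B', 'A', 'C'] -- a 0.05 weight shift flips the top result
```

A nutritional label in the paper's sense would surface exactly this kind of sensitivity, along with fairness and diversity properties, alongside the ranked output itself.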

Cited by 86 publications (50 citation statements)
References 9 publications
“…• quickly grasp properties of a given method to choose an appropriate approach for a desired use case given the unified design of the Fact Sheets (as with food nutrition labels [56] the users know what to expect and how to navigate them).…”
Section: Target Audience
confidence: 99%
“…All of these methods revolve around recording details about the data themselves, e.g., the units of features, the data collection process and their intended purpose. Other researchers argued for a similar approach for predictive models: "model cards for model reporting" [36], "nutrition labels for rankings" [56] and "algorithmic impact assessment" forms [42]. Finally, Arnold et al [1] suggested "fact sheets" for ML services to communicate their capabilities, constraints, biases and transparency.…”
Section: Related Work
confidence: 99%
“…The data management research community is well-positioned to contribute to developing new methods for interpretability. These new contributions can naturally build on a rich body of work on data provenance (see Herschel et al (2017) for a recent survey), on recent work on explaining classifiers (Ribeiro et al 2016) and auditing black box models using a causal framework (Datta et al 2016), and on automatically generating "nutritional labels" for data and models (Yang et al 2018). We can all agree that algorithmic decision-making should be fair, even if we do not agree on the definition of fairness.…”
Section: Algorithmic and Data Transparency
confidence: 99%
“…Green & Chen [25] demonstrate that human decision-making interacts with algorithmic processing in the ultimate disparate impacts of algorithmic systems. Other scholars have pursued empirical case studies [19,50,52], field evaluations [10,65], context-aware uses of datasets [46,63], and questions of institutional access to real-world data [67]. In each case, attending to end-use provides a richer design space and new opportunities for improvements and safeguards.…”
Section: Introduction
confidence: 99%