2023 IEEE/ACM 45th International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER) 2023
DOI: 10.1109/icse-nier58687.2023.00012
MLTEing Models: Negotiating, Evaluating, and Documenting Model and System Qualities

Cited by 3 publications (1 citation statement). References 22 publications.
“…It will not be news to most readers that translating high-level and generalised principles of fairness, bias, or transparency into an actionable and, by extension, auditable obligation can prove exceptionally challenging [26,30,51,119]. Similarly, negotiating benchmarks for quality assessment internally can itself prove a laborious and demanding process as siloed teams can pose significant communication problems and inefficiencies [66,82,89,116]. Unsurprisingly, there has been a plethora of approaches and toolkits for assessing algorithmic systems, with varied yardsticks for measurement and evaluation depending on the nature of the algorithmic system in question [21,40,73,86,91,124,129].…”
Section: Whose Benchmarks? Whose Methods? (mentioning; confidence: 99%)