2020
DOI: 10.1002/cpe.6129
A framework for fostering transparency in shared artificial intelligence models by increasing visibility of contributions

Abstract: Increased adoption of artificial intelligence (AI) systems into scientific workflows will result in an increasing technical debt as the distance between the data scientists and engineers who develop AI system components and scientists, researchers and other users grows. This could quickly become problematic, particularly where guidance or regulations change and once-acceptable best practice becomes outdated, or where data sources are later discredited as biased or inaccurate. This paper presents a novel method…

Cited by 11 publications (10 citation statements)
References 47 publications
“…One key thrust for transparency work centers on the inputs required to produce an AI system. These studies focus on the intent behind [Gebru et al, 2021], composition of [Weissgerber et al, 2016], or use limitations for [Bertino et al, 2019a] datasets as well as address the conflicts that can arise between transparency and user concerns about data privacy and security [Jordan et al, 2020;Beam et al, 2020]. Research toward these factors often explores ways to strike a balance between increasing overall transparency to reap the attendant benefits regarding fairness, accountability, and trust, while mitigating the potential losses vis-a-vis privacy and security.…”
Section: Data-related Transparency Factors
confidence: 99%
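The dataset-documentation practices cited above (recording the intent behind, composition of, and use limitations for a dataset, while flagging privacy concerns) could be captured in a simple record. The following is a minimal hypothetical sketch, not the schema used by any of the cited works; all names and fields here are illustrative assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class Datasheet:
    """Minimal datasheet-style record for a shared dataset (hypothetical sketch).

    Fields mirror the transparency factors discussed above: the intent
    behind a dataset, its composition, and its use limitations, plus a
    flag for the privacy/security tension also raised in the passage.
    """
    name: str
    intent: str                            # why the dataset was created
    composition: str                       # what the instances are, and how many
    use_limitations: list[str] = field(default_factory=list)
    contains_personal_data: bool = False   # triggers privacy/security review

    def requires_privacy_review(self) -> bool:
        # Transparency vs. privacy: personal data means extra scrutiny
        # before the datasheet (or dataset) is shared.
        return self.contains_personal_data


sheet = Datasheet(
    name="clinic-visits-2019",
    intent="Predict missed appointments",
    composition="12,000 visit records from one regional clinic",
    use_limitations=["Not representative of other regions"],
    contains_personal_data=True,
)
print(sheet.requires_privacy_review())  # True
```

A record like this makes the transparency/privacy trade-off explicit: the documentation itself is shareable even when the underlying data is not.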
“…We consider here any work toward elucidating the functionality and quality of AI systems, including methods directed at both practitioners and users. Practitioners often need to debug models or reproduce results more easily [Beam et al, 2020]. On the other hand, users tend to simply desire a basic overview of a system's function for confidence in its functionality [Mei et al, 2022a] (§3.1).…”
Section: System-centered Transparency Factors
confidence: 99%
“…• Results from any model updating [13]

[11] Such as min, max, and median values at the top-10 and overall
[12] Such as recalibration, predictor effects adjusted, or new predictors added
[13] i.e., model specification, model performance

7. Model evaluation [15,42,44,49,64]
• Evaluation dataset(s) [42,44,67]
  - Test and holdout data transparency information [44]
  - Dataset size information: sample size [44,63,64]; rationale for the sample size [63]
  - Preprocessing techniques used [42]
• Comparison between validation and development datasets [63,64]
• Methods used to evaluate model performance, e.g., cross-validation [64]
• Performance measure results [15,42,43,44,49,63,64,69]
• Rationale for performance measures [44]
• Benchmarking against standard datasets [44,49]
• Reliability analysis, e.g., baseline survival [64]
• FAIR [57]
• Third-party performance verifications [44]
• Concept drift [44]
• Interpretation of results [63,64]
  - Whether objectives are met considering the results…”
Section: Information About Model Parameters
confidence: 99%
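The model-evaluation reporting items in the checklist above (evaluation datasets, sample size and its rationale, performance measures and their rationale, benchmarking, interpretation) could likewise be collected in a structured report with a simple completeness check. This is a minimal sketch under the assumption of a flat schema; it does not reproduce the cited checklist's actual format, and every name below is hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class EvaluationReport:
    """Hypothetical container for the model-evaluation reporting items
    listed above: evaluation datasets, sample size with rationale,
    performance measures with rationale, benchmarking, interpretation."""
    eval_datasets: list[str]
    sample_size: int
    sample_size_rationale: str
    performance: dict[str, float]          # measure name -> result
    measure_rationale: str
    benchmarks: list[str] = field(default_factory=list)
    interpretation: str = ""

    def missing_items(self) -> list[str]:
        """Name any optional reporting item left empty, as a lightweight
        transparency/completeness check on the report."""
        missing = []
        if not self.benchmarks:
            missing.append("benchmarking")
        if not self.interpretation:
            missing.append("interpretation")
        return missing


report = EvaluationReport(
    eval_datasets=["holdout-2024"],
    sample_size=5000,
    sample_size_rationale="Chosen for adequate confidence-interval width",
    performance={"auroc": 0.87, "brier": 0.11},
    measure_rationale="Discrimination plus calibration",
)
print(report.missing_items())  # ['benchmarking', 'interpretation']
```

Encoding the checklist this way turns "did we report everything?" into a mechanical check rather than a manual review.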
“…Barclay et al 7 address a variety of new problems related to the use of AI methods in scientific workflows. The problems investigated include changing guidance and regulations, outdated best practices, data sources later discredited as biased or inaccurate, and similar issues.…”
Section: Artificial Intelligence and Machine Learning
confidence: 99%