2021
DOI: 10.1080/01605682.2021.1922098
Transparency, auditability, and explainability of machine learning models in credit scoring

Cited by 88 publications (32 citation statements)
References 34 publications
“…Concerning machine learning, the results of the benchmark do not show a performance increase from using random forests instead of logistic regression and thus confirm the conclusion of Bücker et al. (2021) to carefully analyze the benefits of more complex models and to otherwise prefer simple models such as the shrunk logistic regression model.…”
Section: Discussion (supporting)
confidence: 64%
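To make this kind of benchmark comparison concrete, here is a minimal sketch in Python with scikit-learn. The synthetic data, feature count, class imbalance, and model settings are all illustrative assumptions, not the data or configuration of the cited benchmark; it only shows the shape of a logistic-regression-versus-random-forest comparison by out-of-sample AUC:

```python
# Minimal sketch: compare logistic regression and a random forest by AUC.
# Synthetic data stands in for a real credit data set; all settings are
# illustrative assumptions, not those of the cited benchmark.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Imbalanced binary target, loosely mimicking a credit-default setting.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=5,
                           weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Regularised ("shrunk") logistic regression vs. a default random forest.
models = {
    "logistic regression": make_pipeline(
        StandardScaler(), LogisticRegression(C=0.1, max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

On such simple synthetic data the two models often score similarly, which mirrors the quoted finding that the added complexity of the forest does not automatically pay off.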
“…Although, in general, the presented methodology can be applied to arbitrary machine learning models, the changes in the data induced by the fairness correction put even more emphasis on a deep understanding of the resulting model and on the corresponding methodology of interpretable machine learning to achieve this goal (cf., e.g., Bücker et al., 2021 for an overview in the credit risk scoring context). Further note that, as demonstrated in Szepannek (2019), the obtained interpretations bear the risk of being misleading.…”
Section: Results (mentioning)
confidence: 99%
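As one concrete instance of such interpretable-ML tooling, the following sketch computes permutation feature importances for a fitted model. The data and the gradient-boosting model are illustrative assumptions; the cited works cover a broader toolbox, of which this is just one standard method:

```python
# Minimal sketch: permutation importance as one interpretable-ML tool for
# inspecting a fitted (possibly fairness-corrected) model. Data and model
# choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8,
                           n_informative=4, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

model = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)

# Drop in AUC when each feature is randomly permuted on held-out data:
# large drops flag the features the model actually relies on.
result = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                                n_repeats=20, random_state=1)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.4f} "
          f"+/- {result.importances_std[i]:.4f}")
```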
“…Regulatory requirements as given by the Basel Committee on Banking Supervision (BCBS) (European Banking Authority, 2017) or the EU data protection regulations (Goodman and Flaxman, 2017) have led to increasing interest and research activity in understanding black-box machine learning models by means of explainable machine learning (cf., e.g., Bücker et al., 2021). Even though this is a step in the right direction, such methods cannot guarantee fair scoring, as machine learning models are not necessarily unbiased and may discriminate against certain subpopulations such as a particular race, gender, or sexual orientation, even if the variable itself is not used for modeling.…”
Section: Introduction (mentioning)
confidence: 99%
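To illustrate the kind of group-level bias that explainability methods do not by themselves rule out, a minimal check can compare acceptance rates across a protected subpopulation that is deliberately excluded from the feature set. The data, the synthetic group labels, and the 0.5 acceptance threshold below are illustrative assumptions:

```python
# Minimal sketch: check a scorecard's acceptance rates across a protected
# group even though the group attribute is not used as a model feature.
# Data, group labels, and the 0.5 threshold are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=10,
                           n_informative=4, random_state=0)
# Protected attribute correlated with one feature but NOT passed to the model.
group = (X[:, 0] + rng.normal(scale=0.5, size=len(X)) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
accept = model.predict_proba(X)[:, 1] > 0.5

rates = [accept[group == g].mean() for g in (0, 1)]
print(f"acceptance rate, group 0: {rates[0]:.3f}")
print(f"acceptance rate, group 1: {rates[1]:.3f}")
print(f"demographic parity difference: {abs(rates[0] - rates[1]):.3f}")
```

Because the group label is correlated with a feature the model does use, the acceptance rates typically diverge, which is exactly the indirect discrimination the quoted statement warns about.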
“…As the variability of AI in the medical imaging space is high, the documentation should be complete and detailed, in compliance with the best practices and standards for software development regulated by certification organisations, as in the case of software as a medical device [1,40]. In other words, the data sets, the processes, the reference clinical gold standards, and the contributors that yield the AI system should be documented to the best possible standard to allow for traceability and increased transparency [19]. This entails providing details about data gathering, with information about the clinical sites, the devices used, the acquisition protocols, the dataset composition (see the Fairness principle 2), and data labelling, also with respect to annotation contributors, the annotation tooling used, and the underlying reference standards (e.g., the version of PI-RADS or BI-RADS used by radiologists), as well as the development framework and the algorithms used.…”
Section: Traceability: For Transparent and Dynamic AI in Medical Imaging (mentioning)
confidence: 99%