2020
DOI: 10.3389/frai.2020.00026
Explainable AI in Fintech Risk Management

Abstract: The paper proposes an explainable AI model that can be used in fintech risk management and, in particular, in measuring the risks that arise when credit is borrowed through peer-to-peer lending platforms. The model employs Shapley values, so that AI predictions are interpreted according to the underlying explanatory variables. The empirical analysis of 15,000 small and medium companies asking for peer-to-peer lending credit reveals that both risky and not risky borrowers can be grouped according to a set of …


Cited by 119 publications (80 citation statements)
References 7 publications
“…This idea has recently led some scholars to promote XAI methods aimed at making both the financial technology risk measurement models interpretable and transparent, and the risks of financial innovations, enabled by the application of AI, sustainable (see, e.g. Bracke, Datta, Jung, & Shayak, 2019; Bussmann et al., 2020). In particular, in Bussmann et al. (2020) an explainable AI model based on similarity networks (Mantegna & Stanley, 1999) and Shapley values is proposed to measure the credit risks associated with the use of AI-based credit scoring platforms.…”
Section: Application
confidence: 99%
“…Moreover, this obscurity could result in ethical issues, where black-box applications risk discriminating against users on the basis of race or gender. To date, the only legal directive affecting the use of AI is the European General Data Protection Regulation (GDPR), whose interpretation regarding the obligation of logical explanation in automated processing is currently being debated among AI experts and practitioners (Hacker et al., 2020; Chazette & Schneider, 2020; Bussmann et al., 2020). In terms of ethical guidelines, the European Commission's High-Level Expert Group on AI presented the key requirements in the Ethics Guidelines for Trustworthy Artificial Intelligence in 2019, which correspond directly or indirectly to the use of XAI (Bussmann et al., 2020).…”
Section: Black-box AI
confidence: 99%