2022
DOI: 10.1007/978-3-031-14463-9_4

Effects of Fairness and Explanation on Trust in Ethical AI

Cited by 4 publications (3 citation statements)
References 35 publications
“…For example, the training data used for developing large language models often contain biases, and research has found that ChatGPT replicates gender biases in reference letters written for hypothetical employees (Wan et al., 2023). Such disparities underscore the importance of aligning AI with human values, as perceived fairness significantly influences users’ trust in AI technologies (Angerschmid et al., 2022).…”
Section: A Three-Dimension Framework of Trust in AI
confidence: 99%
“…The concepts of transparency and explainability are deeply interconnected; explainability, in particular, plays a crucial role in reducing users’ perceived risks associated with AI systems (Qin et al., 2020). Additionally, providing reasonable explanations after AI errors can restore people’s trust in AI (Angerschmid et al., 2022).…”
Section: A Three-Dimension Framework of Trust in AI
confidence: 99%
“…In light of the ethical concerns of deploying ML models in more real-world applications, the field of trustworthy ML has grown, which studies and pursues desirable qualities such as fairness, explainability, transparency, privacy and robustness (Varshney, 2019; Angerschmid et al., 2022). Both explainability and robustness may have the potential to promote reliability and trust, and can ensure that human intelligence complements AI (Holzinger, 2021).…”
Section: Modulating and Manipulating Trust
confidence: 99%
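The citation statement above names fairness as one of the measurable qualities pursued by trustworthy ML. As a minimal sketch of what such a fairness check can look like in practice (not taken from Angerschmid et al., 2022; the function, toy data, and 0.1 threshold below are illustrative assumptions), one common group metric is the demographic parity difference, i.e. the gap in positive-prediction rates between two groups:

```python
# Minimal sketch of a group-fairness check (demographic parity difference).
# Illustrative only: the toy predictions and the 0.1 threshold are assumptions,
# not values or methods from Angerschmid et al. (2022).

def demographic_parity_difference(y_pred, group):
    """Return the absolute gap in positive-prediction rates between two groups,
    along with the per-group rates."""
    rates = {}
    for g in set(group):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b), rates

if __name__ == "__main__":
    # Toy binary decisions for ten applicants and their group membership.
    y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
    group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap, rates = demographic_parity_difference(y_pred, group)
    print(f"positive rates per group: {rates}")
    print(f"demographic parity difference: {gap:.2f}")
    # A commonly used (but assumption-laden) rule of thumb flags gaps above 0.1.
    print("flagged as potentially unfair" if gap > 0.1 else "within tolerance")
```

Demographic parity is only one of several group metrics; the choice of metric is itself an assumption here, made to keep the sketch of a fairness disparity concrete.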