2022
DOI: 10.31234/osf.io/qhwvx
Preprint

How Do We Assess the Trustworthiness of AI? Introducing the Trustworthiness Assessment Model (TrAM)

Abstract: Designing trustworthy algorithmic decision-making systems is a central goal in system design. Additionally, it is crucial that external parties can adequately assess the trustworthiness of such systems. Ultimately, this should lead to calibrated trust: trustors trust and distrust the system to the appropriate degree. However, the process through which trustors assess the actual trustworthiness of a system and arrive at their perceived trustworthiness of that system remains underexplored. Transferring from psychological theory about interpersonal …

Cited by 9 publications (3 citation statements)
References 70 publications
“…Alternatively, the user's lack of trust might be a case of decalibrated trust [37], i.e., the incorrect assessment of a trustworthy system as non-trustworthy. Again, this could be explained by several factors [55], and the most likely one in our case is inadequate access to relevant cues and information by the supervisor. In other words, our system needs to allow supervisors to appropriately develop trust in the system by providing more information.…”
Section: Stakeholders (mentioning)
confidence: 74%
“…From an application-oriented perspective, the usefulness of XAI methods hinges on the particular use context, and in order to evaluate the suitability of XAI methods, one must understand the requirements, expectations, and needs of the relevant stakeholders that they are meant to fulfill [17,35,36,51,58,61]. Most considerations regarding XAI requirements take either a broad societal perspective [15,19,56] or the perspective of specific stakeholders [21,29,38], especially in high-stakes fields [20,48,50]. Typical examples of concrete expectations connected to XAI are the improvement of unfairness detection [7,14,30], the resolution of accountability questions, the filling of responsibility gaps [6,44,57], and the improvement of trustworthiness assessments [32,42,43,55].…”
Section: Related Work (mentioning)
confidence: 99%