2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC)
DOI: 10.1109/cic50333.2020.00030
What Information is Required for Explainable AI? : A Provenance-based Research Agenda and Future Challenges

Cited by 8 publications (6 citation statements)
References 17 publications
“…It defines concepts and relationships to capture entities involved in a process, activities that took place, and their interconnections. The 'Six-Ws' framework proposed by Jaigirdar et al [51] provides another idea for the design of XAI provenance.…”
Section: XAI Data Provenance
Confidence: 99%
“…To increase understanding, outputs should be visualised where possible, making them easier for users to interpret [169]. To assist in transparency, users should be provided with sufficient meta-information about the model (including addressing the questions 'who', 'which', 'what', 'why', 'when' and 'where'), an approach further detailed in the explainable AI framework provided by [146].…”
Section: Stakeholder Engagement and User-Centered Design
Confidence: 99%
“…At present, a key barrier and opportunity is that AI developers have only limited awareness of existing solutions that support ethical AI development, with organisational barriers persisting [186]. For example, developers may be unaware how to document their datasets [102] or to present their models in an explainable manner [146], despite these guidelines being available. As suggested by [186], these issues could be overcome by simple organisational mechanisms, rather than technical solutions.…”
Section: Improving Organisational Engagement Amongst AI Developers
Confidence: 99%
“…Moreover, as soon as attacks can have serious consequences to human life or create significant financial damage, it becomes a major concern that end-users are not able to perceive any potential risks or attacks [11], [12] and end-users are not able to estimate or interpret if the data they see is trustworthy [10]. Thus, users do not have suitable cyber-situational awareness [13], [14] to know whether cyber-attacks are possible or have even occurred while the data was propagating through IoT systems.…”
Section: Introduction
Confidence: 99%