2020
DOI: 10.1007/978-3-030-51064-0_1
Trustworthy Human-Centered Automation Through Explainable AI and High-Fidelity Simulation

Cited by 4 publications (5 citation statements)
References 12 publications
“…Computer scientists are focusing on developing algorithmic transparency and glass-box machine learning models (Abdul et al., 2018; Hayes and Moniz, 2021). In parallel, researchers have been proposing design guidelines and considerations to build explainable interfaces and increase AI literacy among users (Amershi et al., 2019; Liao et al., 2020).…”
Section: Discussion
confidence: 99%
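For concreteness, a glass-box model of the kind referred to above can be as simple as a shallow decision tree whose entire decision logic is printable. The following is a minimal sketch, not drawn from the cited works; the synthetic data and feature names are invented for illustration.

```python
# A minimal glass-box example: a shallow decision tree whose full
# rule set can be printed and audited. Synthetic data; feature names
# ("f0".."f2") are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=300, n_features=3, n_informative=2,
                           n_redundant=1, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Unlike a black-box model, every decision path of the fitted tree
# is directly inspectable:
print(export_text(tree, feature_names=["f0", "f1", "f2"]))
```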
“…Furthermore, studies have shown that digital assistants influence users' perceived control during an interaction, often leaving users disappointed with the online recommendation (André et al., 2018) and experiencing psychological reactance when they perceive that their freedom is reduced (Lee and Lee, 2009). Multiple studies have also explored how AI impacts trust in machines and have provided approaches to repair trust in human-machine interactions (de Visser et al., 2018; Kulms and Kopp, 2019; Hayes and Moniz, 2021). Yet, there is no comprehensive account of autonomy perception or reactance that encompasses factors such as trust, control, freedom of choice, and decision-making support.…”
Section: Understanding Autonomy and Reactance
confidence: 99%
“…The authors in [251] proposed guidelines for using XAI techniques and XR-based simulations for secure human-robot interactions. They suggested that the proliferation of high-fidelity VR-based simulation environments will lower the barriers to cataloging robot operations and performing postmortems on them, enabling more rigorous characterization of autonomous system behavior and promoting the adoption of explainable techniques in their controllers.…”
Section: How XAI Can Help
confidence: 99%
“…Similarly, if 6G is enabled with XAI, virtual assistants can provide accurate information to customers. For instance, [203] proposed guidelines for using XAI techniques and XR-based simulations for secure human-robot interactions.…”
Section: Virtual Assistants for Dynamic Customer Experiences
confidence: 99%
“…[173] proposed a SHAP-based backpropagation deep explainer that produces an interpretable model for emergency control applications in smart grids. [203] proposed guidelines for using XAI techniques and XR-based simulations for secure human-robot interactions. [233] maximized explainability with SF-Lasso and selective inference for video and picture ads. [234] proposed an attentive capsule network for click-through rate and conversion rate prediction in online advertising. [212] proposed an XAI solution using DL and Semantic Web technologies for flood monitoring. [208] proposed a SHAP-based method to interpret the outputs of a multilayer perceptron for building-damage assessment.…”
Section: H. Summary of the XAI Impact on 6G Applications and Technical...
confidence: 99%
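To make the SHAP-based interpretation referenced in [173] and [208] concrete, here is a minimal sketch, assuming the `shap` and `scikit-learn` packages. It is not the cited authors' code: the data and model are synthetic, and it uses the model-agnostic KernelExplainer rather than the deep explainer of [173].

```python
# Sketch: attributing a multilayer perceptron's predictions to its input
# features with SHAP values. Synthetic data; illustrative only.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # 4 synthetic input features
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X, y)

# KernelExplainer is model-agnostic; a small background sample keeps it cheap.
background = shap.sample(X, 25)
explainer = shap.KernelExplainer(mlp.predict, background)

# Per-feature SHAP values for a few test points: each row attributes one
# prediction's deviation from the background mean across the 4 features.
shap_values = explainer.shap_values(X[:5])
print(shap_values)
```

Each row of `shap_values` decomposes one prediction into per-feature contributions, which is the sense in which such methods render an otherwise opaque MLP interpretable.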