Explainability in human–agent systems
2019 · DOI: 10.1007/s10458-019-09408-y

Cited by 172 publications (127 citation statements)
References 84 publications
“…Because we aimed to consider a variety of machine learning methods, we preferred using information gain and correlation-based feature selection, which are filter methods and can be used without any connection to a specific machine learning prediction model [24], thus facilitating improved medical understanding [35]. We also considered correlations between features, which often exist in medical datasets. We used the multivariable correlation-based feature selection algorithm to do this.…”

Section: Discussion
Confidence: 99%
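As a rough illustration of the filter-style selection this excerpt describes, the sketch below ranks features by information gain (scikit-learn's mutual_info_classif) and then discards highly correlated ones. The synthetic dataset, the 0.8 correlation cutoff, and the greedy redundancy filter standing in for the multivariable CFS algorithm are all illustrative assumptions, not the citing paper's actual code.

```python
# Minimal sketch of model-agnostic "filter" feature selection.
# mutual_info_classif serves as the information-gain score; the greedy
# correlation filter below is a crude stand-in for CFS (an assumption).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# 1) Rank features by information gain (mutual information with the label).
ig_scores = mutual_info_classif(X, y, random_state=0)
ranked = np.argsort(ig_scores)[::-1]

# 2) Greedily keep features that are relevant but not strongly correlated
#    with features already selected.
selected = []
for j in ranked:
    if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < 0.8 for k in selected):
        selected.append(j)
    if len(selected) == 5:
        break

print("selected feature indices:", selected)
```

Because no prediction model is involved, the same selected subset can be handed to any downstream learner, which is the property the excerpt highlights.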
“…[32] We also considered using deep neural networks, but given the lack of dimensionality of our data, these models are substantially less accurate and interpretable [33-35]. Although we considered several options for building a supervised prediction model, unless otherwise specified, we present the results from a logistic regression prediction model.…”

Section: Data Handling and Machine Learning Approaches
Confidence: 99%
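A minimal sketch of the kind of logistic regression prediction model the excerpt favors over deep networks for low-dimensional tabular data; the breast-cancer dataset, the train/test split, and the max_iter setting are assumptions for illustration only.

```python
# Hedged sketch: logistic regression on a small tabular dataset, where the
# excerpt argues such models beat deep networks on accuracy and interpretability.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# The fitted coefficients are directly inspectable, one reason such models
# are considered more interpretable than deep networks on low-dimensional data.
print("first coefficients:", model.coef_.ravel()[:5])
```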
“…End-users need explanations for various reasons, such as verifying the output, learning from the system, and improving its future operation [71]. Recent studies and surveys in this field explored the user-experience facets of explanations, such as explanation goals, content, and the different forms of presenting and communicating these explanations, including natural language and charts [69,64,1]. However, an understanding of the existing research on delivery methods and modalities is also becoming necessary.…”

Section: Introduction
Confidence: 99%
“…Sharing goals drives the actor's behavior to be explainable (interpretable) with understandability [13], explicability [14], legibility [15], predictability [16], and transparency [17], while hiding goals drives the actor's behavior to be uninterpretable with obfuscation, deception, privacy, and security [18]. In the cooperative setting, the explainability and interpretability of behavior remain significantly challenging in developing human-aware AI agents [18] and human-agent systems [19]. Goal signaling assists behavior explanation in human-robot teaming [20].…”

Section: Introduction
Confidence: 99%