2017
DOI: 10.48550/arxiv.1702.07826
Preprint

Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations

Abstract: We introduce AI rationalization, an approach for generating explanations of autonomous system behavior as if a human had performed the behavior. We describe a rationalization technique that uses neural machine translation to translate internal state-action representations of an autonomous agent into natural language. We evaluate our technique in the Frogger game environment, training an autonomous game playing agent to rationalize its action choices using natural language. A natural language training corpus is…
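
The abstract frames rationalization as a translation problem, so a minimal encoder-decoder sketch can make the setup concrete. The sketch below is not the authors' implementation: it assumes a toy tokenization with the serialized state-action representation on the source side and rationale words on the target side, uses a plain GRU encoder-decoder without the attention mechanism typical of NMT systems, and all vocabulary sizes, dimensions, and the single training pair are illustrative placeholders.

```python
# Minimal sketch (assumed, not the authors' code) of the paper's framing:
# treat rationalization as machine translation from an agent's internal
# state-action representation to a natural-language explanation.
import torch
import torch.nn as nn

class RationalizationSeq2Seq(nn.Module):
    """GRU encoder-decoder; a full NMT setup would typically add attention."""
    def __init__(self, src_vocab: int, tgt_vocab: int, hidden: int = 64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, hidden)
        self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src: torch.Tensor, tgt_in: torch.Tensor) -> torch.Tensor:
        # Encode the serialized state-action tokens; the final hidden state
        # conditions the decoder, which predicts rationale tokens.
        _, h = self.encoder(self.src_emb(src))
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), h)
        return self.out(dec_out)

# One hypothetical parallel pair (token ids are placeholders):
# source ~ "frog_row_3 car_approaching_left action_up"
# target ~ "<s> I moved up to avoid the car </s>"
src = torch.tensor([[1, 2, 3]])
tgt = torch.tensor([[1, 4, 5, 6, 7, 8, 9, 2]])

model = RationalizationSeq2Seq(src_vocab=10, tgt_vocab=10)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(200):  # overfit the single toy pair, teacher-forced
    logits = model(src, tgt[:, :-1])             # decoder input: shifted target
    loss = loss_fn(logits.reshape(-1, 10), tgt[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At inference time, decoding would start from the start-of-sentence token and feed each predicted word back into the decoder; the paper evaluates rationales generated this way for an agent playing Frogger.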

Cited by 5 publications (4 citation statements). References 4 publications.
“…Codella et al [9] demonstrated how explanations could improve model accuracy on three tasks, but required that every sample be hand-labeled with an explanation and treated explanations from different authors as equivalent. Ehsan et al [17] made use of explainable AI for automated game playing. Their approach relies on a secondary machine learning interpretation of the original behavior, rather than visualizing or explaining the original model as our approach does.…”
Section: Changes To Morai Maker (mentioning)
confidence: 99%
“…While feature-based explanations are the most common approach, other forms have been proposed, including: similarity to learned "prototypes" which represent clusters of items from the training data (Li et al, 2017); high-precision feature interaction rules (Ribeiro et al, 2018); reference to predefined human-friendly concepts; and generated natural language (Ehsan et al, 2017). Likewise, many evaluation criteria have been proposed.…”
Section: Interpretable Machine Learning (mentioning)
confidence: 99%
“…XAI aims to establish transparency by explaining what decisions led to the creation of the algorithm in addition to its inputs and outcomes, which provide the basis for trusting the algorithm [21]. Methods through which XAI can be established include layer-wise relevance propagation [22] and rationalization [23]. Furthermore, Friedrich et al [24] discussed the role and benefits of statistics, which they see as a natural partner in AI developments, for example, in calculating the sample size but also for bias control.…”
Section: Introduction (mentioning)
confidence: 99%