Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
DOI: 10.18653/v1/2020.emnlp-demos.15
The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models

Abstract: We present the Language Interpretability Tool (LIT), an open-source platform for visualization and understanding of NLP models. We focus on core questions about model behavior: Why did my model make this prediction? When does it perform poorly? What happens under a controlled change in the input? LIT integrates local explanations, aggregate analysis, and counterfactual generation into a streamlined, browser-based interface to enable rapid exploration and error analysis. We include case studies for a diverse set of workflows…
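As a hedged illustration of the browser-based workflow the abstract describes, a minimal LIT server might be wired up as below. This is a sketch against the lit_nlp Python API as documented in the project's repository (names such as dev_server.Server, Dataset, Model, and predict_minibatch come from those docs and may differ across LIT versions; the toy dataset and model are invented for illustration):

```python
# Hedged sketch: serving a toy model in LIT. API names follow the lit_nlp
# docs and may vary across versions; this is not the paper's verbatim example.
from lit_nlp import dev_server
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

LABELS = ["negative", "positive"]

class ToyDataset(lit_dataset.Dataset):
    """A two-example sentiment dataset."""
    def __init__(self):
        self._examples = [
            {"sentence": "A great film.", "label": "positive"},
            {"sentence": "Dull and far too long.", "label": "negative"},
        ]
    def spec(self):
        return {"sentence": lit_types.TextSegment(),
                "label": lit_types.CategoryLabel(vocab=LABELS)}

class ToyModel(lit_model.Model):
    """A trivial stand-in model that predicts a uniform distribution."""
    def input_spec(self):
        return {"sentence": lit_types.TextSegment()}
    def output_spec(self):
        return {"probas": lit_types.MulticlassPreds(vocab=LABELS, parent="label")}
    def predict_minibatch(self, inputs):
        return [{"probas": [0.5, 0.5]} for _ in inputs]

if __name__ == "__main__":
    # Starts the LIT web UI on localhost so the model and dataset
    # can be explored interactively in the browser.
    dev_server.Server(models={"toy": ToyModel()},
                      datasets={"toy": ToyDataset()},
                      port=5432).serve()
```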

Cited by 107 publications (67 citation statements)
References 29 publications
“…Enriching ExplainaBoard with Glass-box Analysis: EXPLAINABOARD currently performs black-box analysis, solely analyzing system outputs without accessing model internals. On the other hand, there are many other glass-box interpretability tools that look at model internals, such as AllenNLP Interpret (Wallace et al., 2019) and the Language Interpretability Tool (Tenney et al., 2020). Expanding leaderboards to glass-box analysis methods (see Lipton (2018); Belinkov and Glass (2019) for a survey) is interesting future work.…”
Section: Implications and Roadmap
Citation type: mentioning (confidence: 99%)
“…The Newsroom dataset visualization tool (Grusky et al., 2018) highlights n-grams in the summary that overlap with the source article. The LIT tool (Tenney et al., 2020) highlights words or characters that differ between reference and generated texts. However, neither tool aligns the matched text (Yousef and Janicke, 2021).…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…Existing visualization systems and techniques do not visually connect attention mechanisms to linguistic knowledge (Tenney et al., 2020; DeRose et al., 2021), so we propose novel visualization approaches that foster exploration across semantically and syntactically significant attention heads in complex model architectures. For example, for each of the 144 attention heads of BERT, the entry A_{i,j} in the attention map A represents the attention weight from token i to token j.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
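The excerpt above refers to BERT's 144 attention heads (12 layers × 12 heads) and to per-head attention maps with entries A_{i,j}. As a hedged illustration of how such maps can be extracted (a generic sketch using the HuggingFace transformers package, not code from the cited paper; the example sentence and index choices are arbitrary):

```python
# Sketch: extracting per-head attention maps from BERT with HuggingFace
# transformers. A[i, j] is the attention weight from token i to token j.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple of 12 layers, each of shape
# [batch, heads=12, seq, seq]: 12 layers x 12 heads = 144 maps in total.
attn = torch.stack(outputs.attentions).squeeze(1)  # [12, 12, seq, seq]
layer, head, i, j = 0, 0, 1, 2
print(f"A[{i},{j}] for layer {layer}, head {head}:",
      attn[layer, head, i, j].item())
```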