2019
DOI: 10.1017/s135132491900024x

Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop

Abstract: The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techniques specifically developed for analyzing and understanding the inner workings and representations acquired by neural models of language. Approaches included: systematic manipulation of input to neural networks and investigating the impact on their performance, testing whether interpretable knowledge can be decoded from intermediate representations acquired by neural networks, proposing modifications to neural network architectures to mak…
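The first approach listed in the abstract, systematically manipulating the input and measuring the impact on model performance, can be illustrated with a minimal occlusion-style sketch. Everything below is a hypothetical example rather than anything from the workshop papers: it assumes scikit-learn is available, and the toy sentiment data and the token_importance helper are invented for illustration.

```python
# Minimal occlusion-style input-manipulation sketch (hypothetical example):
# train a toy bag-of-words sentiment classifier, then delete one token at a
# time and record how much the predicted probability shifts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["a great and moving film", "a dull and lifeless film",
         "wonderful acting", "terrible plot"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

def token_importance(sentence: str):
    """Drop each token in turn and measure the change in P(positive)."""
    tokens = sentence.split()
    base = model.predict_proba(vectorizer.transform([sentence]))[0, 1]
    scores = {}
    for i, tok in enumerate(tokens):
        ablated = " ".join(tokens[:i] + tokens[i + 1:])
        p = model.predict_proba(vectorizer.transform([ablated]))[0, 1]
        scores[tok] = base - p  # positive score: removing the token lowered P(positive)
    return scores

print(token_importance("a great but dull film"))
```

Running token_importance on a sentence shows which tokens move the prediction most when removed, which is the basic logic behind occlusion-based input analyses.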

Cited by 46 publications (35 citation statements)
References: 80 publications
“[…] Given the success of these models, and their complexity, there is a booming interest in the computational linguistic community in understanding what aspects of language they capture, and how (Alishahi et al. 2019). Recently, Pater (2019) has argued for the integration of neural network models in linguistic research (also see the responses to his article).…”
Section: Discussion
confidence: 99%
“…How to generally and objectively evaluate explanations, without resorting to ad-hoc evaluation procedures that are domain and task specific, is still active research (Alishahi et al., 2019; Belinkov and Glass, 2019).…”
Section: Previous Work
confidence: 99%
“…However, their inner workings are poorly understood; indeed, for this reason, they are often referred to as black-box systems (Psichogios and Ungar, 1992; Orphanos et al., 1999; Cauer et al., 2000). This lack of understanding, coupled with the rising adoption of neural NLP systems in both industry and academia, has fomented a rapidly growing literature devoted to "cracking open the black box," as it were (Alishahi et al., 2019; […] et al., 2019). One popular method for studying the linguistic content of neural networks is probing, which we define in this work as training a supervised classifier (known as a probe) on top of pretrained models' frozen representations (Alain and Bengio, 2017).…”
Section: Introduction
confidence: 99%
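As a rough sketch of the probing setup described in the quote above (a supervised classifier trained on a pretrained model's frozen representations), the following assumes the Hugging Face transformers and scikit-learn libraries are installed; the sentence_embedding helper, the toy "tense" task, and its sentences and labels are hypothetical and not taken from any of the cited papers.

```python
# Minimal probing sketch: extract frozen BERT representations, then train a
# simple logistic-regression probe on top of them for a toy property.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()  # the pretrained model stays frozen; only the probe is trained

def sentence_embedding(sentence: str):
    """Mean-pool the last hidden layer into a single frozen sentence vector."""
    with torch.no_grad():
        inputs = tokenizer(sentence, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

# Hypothetical probing task: can the frozen embeddings predict verb tense?
sentences = ["she walked home", "she walks home", "they ate dinner", "they eat dinner"]
labels    = [0, 1, 0, 1]  # 0 = past, 1 = present (invented labels)

X = [sentence_embedding(s) for s in sentences]
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("probe accuracy on the toy training set:", probe.score(X, labels))
```

Because the probe only sees mean-pooled frozen vectors, above-chance accuracy on a genuine held-out set would suggest the property is linearly decodable from the representations; the toy data here is only meant to show the mechanics.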