Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
DOI: 10.18653/v1/w19-4820

Blackbox Meets Blackbox: Representational Similarity & Stability Analysis of Neural Language Models and Brains

Abstract: In this paper, we define and apply representational stability analysis (ReStA), an intuitive way of analyzing neural language models. ReStA is a variant of the popular representational similarity analysis (RSA) from cognitive neuroscience. While RSA can be used to compare representations across models, model components, and human brains, ReStA compares instances of the same model while systematically varying a single model parameter.
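The idea can be illustrated with a small toy sketch (this is not the paper's implementation; the "model" is a random projection and the swept parameter is a hypothetical noise-mixing knob `alpha` standing in for a real model parameter such as context length). Successive instances of the same model are compared with RSA, giving a stability curve:

```python
import numpy as np

def rsa(reps_a, reps_b):
    """RSA: Spearman correlation between the upper triangles of the
    pairwise-distance matrices computed over the same n stimuli."""
    dist = lambda r: np.sqrt(((r[:, None, :] - r[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(reps_a.shape[0], k=1)
    a, b = dist(reps_a)[iu], dist(reps_b)[iu]
    ranks = lambda x: x.argsort().argsort()  # rank-transform, then Pearson
    return np.corrcoef(ranks(a), ranks(b))[0, 1]

# ReStA sketch: fix the stimuli and the model, vary one parameter.
# Here the "parameter" is a hypothetical knob alpha that replaces an
# increasing fraction of the model's signal with fresh noise.
rng = np.random.default_rng(1)
stimuli = rng.normal(size=(30, 64))            # 30 fixed stimuli
base = stimuli @ rng.normal(size=(64, 16))     # one "model" instance
base /= base.std()
instances = [(1 - alpha) * base + alpha * rng.normal(size=base.shape)
             for alpha in (0.0, 0.25, 0.5, 0.75, 1.0)]

# Stability curve: RSA between successive instances of the same model.
stability = [rsa(x, y) for x, y in zip(instances, instances[1:])]
print(np.round(stability, 2))  # high for small alpha, dropping toward 0
```

The curve shows how far the representational geometry drifts as the single parameter moves, which is the quantity ReStA tracks.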

Cited by 53 publications (44 citation statements)
References 30 publications
“…Voita et al (2019a) used a form of canonical correlation analysis (PW-CCA; Morcos et al, 2018) to study the layerwise evolution of representations, while Saphra and Lopez (2019) explored how these representations evolve during training. Abnar et al (2019) used Representational Similarity Analysis (RSA; Laakso and Cottrell, 2000;Kriegeskorte et al, 2008) to study the effect of context on encoder representations, while Chrupała and Alishahi (2019) correlated them with syntax.…”
Section: Related Work (confidence: 99%)
“…RSA is a technique for measuring the similarity between two different representation spaces for a given set of stimuli. Originally developed for neuroscience (Kriegeskorte et al, 2008), it has become increasingly used to analyze similarity between neural network activations (Abnar et al, 2019; Chrupała and Alishahi, 2019). The method works by using a common set of n examples to create two sets of representations.…”
Section: Representational Similarity Analysis (confidence: 99%)
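The procedure described in that excerpt can be sketched in a few lines of NumPy (a toy illustration, not any cited paper's implementation): each space's pairwise-distance matrix over the same n examples forms a representational dissimilarity matrix (RDM), and the two RDMs are compared by Spearman correlation of their upper triangles.

```python
import numpy as np

def rdm(reps):
    """Representational dissimilarity matrix: pairwise Euclidean
    distances between the n stimulus representations, (n, d) -> (n, n)."""
    diffs = reps[:, None, :] - reps[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

def rsa_score(reps_a, reps_b):
    """Spearman correlation between the upper triangles of the two RDMs.
    The two spaces may differ in dimensionality; only n must match."""
    iu = np.triu_indices(reps_a.shape[0], k=1)
    a, b = rdm(reps_a)[iu], rdm(reps_b)[iu]
    # Spearman = Pearson correlation of the rank-transformed distances.
    ranks = lambda x: x.argsort().argsort()
    return np.corrcoef(ranks(a), ranks(b))[0, 1]

# Toy check: RSA is invariant to rotation of the feature space.
rng = np.random.default_rng(0)
a = rng.normal(size=(20, 50))                   # 20 examples, 50-dim space
q, _ = np.linalg.qr(rng.normal(size=(50, 50)))  # random orthogonal matrix
print(rsa_score(a, a @ q))                      # ~1.0: same geometry
print(rsa_score(a, rng.normal(size=(20, 30))))  # near 0: unrelated space
```

Because only the relative geometry of the examples enters the comparison, RSA can relate spaces that share no coordinates at all, e.g. network activations and brain recordings.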
“…Huth et al (2016) replicated and extended these results using distributed word representations, and Pereira et al (2018) extended these results to sentence stimuli. Wehbe et al (2014), Qian et al (2016), Jain and Huth (2018), and Abnar et al (2019) next introduced more complex word and sentence meaning representations, demonstrating that neural network language models could better account for brain activation by incorporating these representations.…”
[Figure 1: Brain decoding methodology. We use human brain activations in response to sentences to predict how neural networks represent those same sentences.]
Section: Related Work (confidence: 99%)
“…Moreover, we use RSA to compare brain imaging data with computational models, and it is worth investigating applications of RSA beyond brain imaging data. For example, recent work has used RSA to compare representational spaces across computational language models and their individual components (Gauthier and Levy, 2019;Abnar et al, 2019;Chrupała and Alishahi, 2019). We also feel that the multi-arrangement method is underutilized in NLP for the shortcomings it addresses in existing semantic judgement acquisition techniques.…”
Section: Results (confidence: 99%)