Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.140

Probing Linguistic Features of Sentence-Level Representations in Relation Extraction

Abstract: Despite the recent progress, little is known about the features captured by state-of-the-art neural relation extraction (RE) models. Common methods encode the source sentence, conditioned on the entity mentions, before classifying the relation. However, the complexity of the task makes it difficult to understand how encoder architecture and supporting linguistic knowledge affect the features learned by the encoder. We introduce 14 probing tasks targeting linguistic properties relevant to RE, and we use them to…
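The abstract describes the common sentence-level RE setup: encode the source sentence, conditioned on the entity mentions, then classify the relation from the resulting sentence representation. A minimal sketch of that setup is given below; it is not the authors' implementation, and the encoder choice, marker scheme, and all sizes are illustrative assumptions.

```python
# Illustrative sketch of a sentence-level RE model: encode the sentence,
# conditioned on the two entity mentions, then classify the relation from
# the pooled sentence representation. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class SentenceREModel(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128, n_relations=19):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Entity-position indicator embeddings (0 = other, 1 = head, 2 = tail).
        self.ent = nn.Embedding(3, 10)
        self.encoder = nn.LSTM(emb_dim + 10, hidden,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_relations)

    def forward(self, token_ids, entity_ids):
        x = torch.cat([self.emb(token_ids), self.ent(entity_ids)], dim=-1)
        states, _ = self.encoder(x)
        sent_repr = states.max(dim=1).values   # pooled sentence representation
        return self.classifier(sent_repr), sent_repr

# Toy usage: a batch of 2 sentences with 12 tokens each.
tokens = torch.randint(0, 10000, (2, 12))
entities = torch.zeros(2, 12, dtype=torch.long)
entities[:, 2] = 1   # head mention position (illustrative)
entities[:, 7] = 2   # tail mention position (illustrative)
logits, sent_repr = SentenceREModel()(tokens, entities)
print(logits.shape, sent_repr.shape)  # (2, 19) relation scores, (2, 256) representation
```

The pooled `sent_repr` is the kind of fixed-size sentence representation that the paper's probing tasks interrogate for linguistic properties.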

Cited by 30 publications (22 citation statements)
References 23 publications
“…4 To Probe or Not to Probe? By using the probing technique, different linguistic phenomena such as POS, dependency information, and NER (Tenney et al., 2019a; Liu et al., 2019a; Alt et al., 2020) have been found to be “easily extractable” (typically using linear probes). A naive interpretation of these results may conclude that because information can be easily extracted by the probing model, this information is being used for the predictions.…”
Section: Metrics
Mentioning confidence: 99%
“…The main neural networks used are the Convolutional Neural Network (CNN) [5][6][7] and the Recurrent Neural Network (RNN) [8][9][10]. Moreover, according to whether the extracted entity pairs span sentences, relation extraction models can be divided into sentence-level relation extraction models [11] and cross-sentence relation extraction models [12]. Some researchers also perform document-level relation extraction [13,14].…”
Section: Related Work
Mentioning confidence: 99%
“…Diagnostic probes were originally intended to explain the information encoded in intermediate representations (Adi et al., 2017; Alain and Bengio, 2017). Recently, various probing tasks have queried the representations of, e.g., contextualized word embeddings (Tenney et al., 2019a,b) and sentence embeddings (Linzen et al., 2016; Chen et al., 2019; Alt et al., 2020; Kassner and Schütze, 2020; Chi et al., 2020).…”
Section: Related Work
Mentioning confidence: 99%
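The first and last citation statements above refer to probing with (typically linear) probes trained on frozen representations. A minimal sketch of such a probe is given below, under the assumption that fixed-size sentence representations have already been extracted and a linguistic label to probe for (e.g. a coarse entity type) is available; the random data stands in for real embeddings and labels.

```python
# Minimal linear-probe sketch: train a simple classifier on frozen sentence
# representations and measure how easily a linguistic property can be read off.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical data: one frozen sentence embedding per example plus the
# probed label (placeholder random values for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 768))    # frozen sentence representations
y = rng.integers(0, 4, size=1000)   # e.g. coarse entity-type label to probe for

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A linear probe: the simpler the probe, the more directly its accuracy
# reflects how accessible the property is in the representation.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)

print("probing accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```

As the first citation statement cautions, high probing accuracy shows only that the property is extractable from the representation, not that the downstream RE classifier actually relies on it.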