Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.272

ZS-BERT: Towards Zero-Shot Relation Extraction with Attribute Representation Learning

Abstract: While relation extraction is an essential task in knowledge acquisition and representation, and newly generated relations are common in the real world, less effort has been made to predict unseen relations that cannot be observed at the training stage. In this paper, we formulate the zero-shot relation extraction problem by incorporating the text description of seen and unseen relations. We propose a novel multi-task learning model, zero-shot BERT (ZS-BERT), to directly predict unseen relations without handcrafted attr…
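The abstract's core idea — predicting an unseen relation by comparing a sentence against the text descriptions of candidate relations in a shared embedding space — can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes some encoder has already produced the embeddings (here, toy vectors), and the function and relation names are hypothetical, used only to show nearest-neighbor prediction over relation-description embeddings.

```python
import numpy as np

def predict_relation(sentence_emb, relation_embs, relation_names):
    """Pick the relation whose description embedding is most similar
    to the sentence embedding (cosine similarity).

    sentence_emb:   1-D array, embedding of the input sentence
    relation_embs:  2-D array, one row per candidate relation description
    relation_names: labels aligned with the rows of relation_embs
    """
    sims = relation_embs @ sentence_emb / (
        np.linalg.norm(relation_embs, axis=1) * np.linalg.norm(sentence_emb)
    )
    return relation_names[int(np.argmax(sims))]

# Toy example: 2-D "embeddings" standing in for encoder outputs.
names = ["founded_by", "located_in"]
descriptions = np.array([[1.0, 0.0],   # hypothetical embedding of "founded_by" description
                         [0.0, 1.0]])  # hypothetical embedding of "located_in" description
sentence = np.array([0.9, 0.1])
print(predict_relation(sentence, descriptions, names))
```

Because prediction only requires a description embedding for each candidate relation, relations never seen during training can still be ranked at test time — which is what makes the setup zero-shot.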

Cited by 48 publications (64 citation statements)
References 23 publications
“…We conduct several experiments with ablation studies on three public datasets: FewRel (Han et al., 2018), Wiki-ZSL (Sorokin and Gurevych, 2017; Chen and Li, 2021), and NYT (Riedel et al., 2010) to show that our proposed model outperforms other existing state-of-the-art models and is more robust than the other models in zero-shot learning tasks.…”
Section: Methods
confidence: 99%
“…In particular, by listing questions that define the relation's slot values (Levy et al., 2017; Cetoli, 2020). To avoid relying on question-answering models, some studies formulate relation extraction as a textual entailment task and utilize the accessibility of the relation descriptions (Obamuyide and Vlachos, 2018; Qin et al., 2020; Gong and Eldardiry, 2021; Chen and Li, 2021). However, these models only utilize the semantic information of class names, losing the connections between relations.…”
Section: Zero-shot Relation Classification
confidence: 99%