Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.142
A Two-phase Prototypical Network Model for Incremental Few-shot Relation Classification

Abstract: Relation Classification (RC) plays an important role in natural language processing (NLP). Conventional supervised and distantly supervised RC models typically make a closed-world assumption, ignoring the emergence of novel relations in an open environment. To incrementally recognize novel relations, two current solutions (i.e., re-training and lifelong learning) have been designed, but both suffer from the lack of large-scale labeled data for novel relations. Meanwhile, the prototypical network enjoys better perf…

Cited by 37 publications (44 citation statements)
References 28 publications
“…Relation network [19] adapted a convolutional neural network to extract features of support and query samples, and relation classification scores were obtained by feeding the concatenated support and query vectors into the relation network. To overcome the catastrophic forgetting problem, Cai et al. [20] introduced a two-phase prototypical network that uses prototype attention alignment and a triplet loss to dynamically recognize novel relations from a few support instances without catastrophic forgetting. Similarly, Fan et al. [21] proposed a large-margin prototypical network with fine-grained features (LM-ProtoNet), which generalizes well on few-shot relation classification.…”
Section: Few-shot Relation Classification
Confidence: 99%
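The prototypical-network idea underlying the cited work can be sketched in a few lines: each class is represented by the mean embedding (prototype) of its support instances, queries are assigned to the nearest prototype, and a triplet loss pulls an instance toward its own prototype while pushing it away from others. This is a minimal illustrative sketch, not the authors' implementation; the function names and the use of Euclidean distance are assumptions for clarity.

```python
import numpy as np

def prototypes(support, labels):
    # support: (n, d) instance embeddings; labels: (n,) integer class ids.
    # Each class prototype is the mean of that class's support embeddings.
    classes = np.unique(labels)
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query, protos):
    # Assign each query embedding to the nearest prototype (Euclidean distance).
    dists = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge-style triplet loss: pull the anchor toward its own prototype
    # (positive) and push it away from another class's prototype (negative).
    return max(0.0,
               np.linalg.norm(anchor - positive)
               - np.linalg.norm(anchor - negative)
               + margin)
```

For example, with two support instances per class, queries land on the prototype of the class they lie closest to; during training, the triplet term is zero once an instance is at least `margin` closer to its own prototype than to any other.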
“…As Al-Shedivat et al. (2021) recently showed, such approaches are most efficient when working with a small number of training samples. Many variants have been proposed for different tasks and topics, such as relation classification in text (Gao et al., 2019; Hui et al., 2020; Ren et al., 2020), sentiment classification of Amazon reviews (Bao et al., 2020), named entity recognition (Fritzler et al., 2019; Hou et al., 2020; Perl et al., 2020; Safranchik et al., 2020), and even speech classification in conversation (Koluguri et al., 2020). This surge of interest in applying few-shot learning to these topics can be attributed to dedicated datasets such as FewRel (Han et al., 2018) for relation classification.…”
Section: Related Work
Confidence: 99%