Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019)
DOI: 10.18653/v1/n19-1087
Description-Based Zero-shot Fine-Grained Entity Typing

Abstract: Fine-grained entity typing (FGET) is the task of assigning a fine-grained type from a hierarchy to entity mentions in the text. As the taxonomy of types evolves continuously, it is desirable for an entity typing system to be able to recognize novel types without additional training. This work proposes a zero-shot entity typing approach that utilizes the type description available from Wikipedia to build a distributed semantic representation of the types. During training, our system learns to align the entity m…
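The abstract sketches the key idea: types are represented by encoding their Wikipedia descriptions, so a new type can be scored against a mention without retraining. Below is a minimal, hedged Python sketch of that idea; the averaged-word-embedding encoder, the type names and descriptions, and the bilinear scorer are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 8
_word_vecs = {}  # toy stand-in for pretrained word embeddings (e.g. GloVe)

def embed(word):
    """Return a (toy) embedding for a word, creating one on first use."""
    if word not in _word_vecs:
        _word_vecs[word] = rng.normal(size=EMB_DIM)
    return _word_vecs[word]

def encode(text):
    """Encode a text span as the average of its word embeddings."""
    return np.mean([embed(w) for w in text.lower().split()], axis=0)

# Type representations are built from description text (e.g. the first sentence
# of the type's Wikipedia page), so an unseen type only needs a description.
type_descriptions = {
    "/location/city": "a large permanent human settlement with administrative status",
    "/person/artist": "a person engaged in creating art such as painting or music",
}
type_vecs = {t: encode(d) for t, d in type_descriptions.items()}

# Compatibility matrix learned during training on seen types (random here).
W = rng.normal(size=(EMB_DIM, EMB_DIM)) * 0.1

def score(mention_context, type_name):
    """Bilinear compatibility m^T W d between a mention and a type description."""
    return float(encode(mention_context) @ W @ type_vecs[type_name])

mention = "Portland hosted the annual jazz festival downtown"
print(sorted(type_vecs, key=lambda t: score(mention, t), reverse=True))
```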

Cited by 39 publications (47 citation statements). References 28 publications.
“…To enable the zero-shot paradigm, previous studies (Ma et al., 2016; Obeidat et al., 2019) introduce a score function f(·) to rate the match between a given entity mention x and an entity type y, where y is the raw type picked from Y_seen or Y_unseen. The definition of f(·) is:…”
Section: Memory Augmented Typing Function
confidence: 99%
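The quoted definition of f(·) is cut off above. A common instantiation in this line of work, shown here only as a hedged sketch (following the bilinear form used by Ma et al., 2016-style approaches; it may not match the exact definition in the citing paper), is:

```latex
% Hedged sketch of a typical zero-shot typing score function (bilinear form);
% the exact definition quoted above is elided.
f(x, y) = \theta(x)^{\top} W \, \phi(y)
% \theta(x): representation of the entity mention x and its context
% \phi(y):   representation of the type y (e.g. from its label or description)
% W:         learned compatibility matrix shared by Y_seen and Y_unseen types
```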
“…Another direction is to construct a shared space for linking the seen and unseen data. These models (Ma et al., 2016; Yuan and Downey, 2018; Obeidat et al., 2019) map the mention and label embeddings into a shared latent space, then estimate a closeness score for each mention-label pair. Most existing zero-shot FNET methods limit the model's flexibility by relying on considerable auxiliary resources or pre-prepared hand-crafted features.…”
Section: Related Work
confidence: 99%
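As a rough illustration of the shared-space direction described in this excerpt, the sketch below projects a mention embedding and a label embedding into a common space and scores them by cosine similarity; the dimensions, projection matrices, and similarity choice are assumptions for illustration, not the specifics of the models cited.

```python
import numpy as np

rng = np.random.default_rng(1)
MENTION_DIM, LABEL_DIM, SHARED_DIM = 8, 6, 4

# Projection matrices into the shared latent space (random here; in practice
# they are learned from mentions labeled with seen types).
W_mention = rng.normal(size=(SHARED_DIM, MENTION_DIM))
W_label = rng.normal(size=(SHARED_DIM, LABEL_DIM))

def closeness(mention_vec, label_vec):
    """Cosine similarity after projecting both embeddings into the shared space."""
    m = W_mention @ mention_vec
    y = W_label @ label_vec
    return float(m @ y / (np.linalg.norm(m) * np.linalg.norm(y) + 1e-8))

# An unseen label can be scored as long as it has an embedding (e.g. built from
# its name or description), without retraining the projections.
mention_vec = rng.normal(size=MENTION_DIM)
unseen_label_vec = rng.normal(size=LABEL_DIM)
print(closeness(mention_vec, unseen_label_vec))
```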
“…However, these models require massive amounts of labeled data for a new domain, hindering the rapid development of new tasks. To address the data-intensiveness problem, domain adaptation approaches (Bapna et al., 2017; Lee and Jha, 2019; Shah et al., 2019; Obeidat et al., 2019; Liu et al., 2020b; He et al., 2020c) have been successfully applied. In this paper, we focus on zero-shot cross-domain transfer learning, which leverages knowledge learned in the source domains and adapts the models to the target domain without labeled training samples in the target domain.…”
Section: Introduction
confidence: 99%