Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1010
Hierarchical Losses and New Resources for Fine-grained Entity Typing and Linking

Abstract: Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies. Previous attempts to incorporate hierarchical structure have yielded little benefit and are restricted to shallow ontologies. This paper presents new methods using real and complex bilinear mappings for integrating hierarchical information, yielding substantial improvement o…

Cited by 88 publications (90 citation statements)
References 39 publications (43 reference statements)
“…We perform experiments on two benchmark Biocreative datasets as well as the recently introduced MedMentions data (Murty et al, 2018). Our experiments consider three types of models.…”
Section: Results
confidence: 99%
“…for various values of K nearest neighbours. Our candidate generator provides a 5% absolute improvement over Murty et al (2018) despite generating 46% fewer candidates per mention on average.…”
Section: Candidate Generation
confidence: 98%
“…This reduced subset is comprised of sections 0, 1, 2 and 9 (SNOMED) of the UMLS 2017 AA release, which are publicly distributable. It contains 2.78M unique concepts and covers 99% of the mention concepts present in the MedMentions dataset (Murty et al, 2018).…”
Section: Candidate Generation for Entity Linking
confidence: 99%
“…Entity linking models. Entity linking given mention boundaries as input can be broken into the tasks of candidate generation and candidate ranking. When frequency information or alias tables are unavailable, prior work has used measures of similarity of the mention string to entity names for candidate generation (Sil et al, 2012; Murty et al, 2018). For candidate ranking, recent work employed distributed representations of mentions in context and entity candidates and neural models to score their compatibility.…”
Section: Related Work
confidence: 99%