2022
DOI: 10.48550/arxiv.2204.13796
Preprint

Instilling Type Knowledge in Language Models via Multi-Task QA

Abstract: Understanding human language often necessitates understanding entities and their place in a taxonomy of knowledge, that is, their types. Previous methods to learn entity types rely on training classifiers on datasets with coarse, noisy, and incomplete labels. We introduce a method to instill fine-grained type knowledge in language models with text-to-text pre-training on type-centric questions leveraging knowledge base documents and knowledge graphs. We create the WikiWiki dataset: entities and passages from 10M Wikipedia…
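The abstract frames type prediction as text-to-text pre-training on type-centric questions. The sketch below shows how one such question-answer pair could be formatted and used for a single seq2seq training step with a T5-style model; the prompt template, the "physicist" type label, and the choice of t5-small are illustrative assumptions, not the paper's released data format or code.

```python
# Minimal sketch (not the authors' implementation) of a type-centric QA
# example cast as text-to-text training data for a T5-style model.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical passage/entity pair in the spirit of knowledge-base documents.
passage = ("Marie Curie conducted pioneering research on radioactivity "
           "and was the first woman to win a Nobel Prize.")
question = f"question: What is the type of 'Marie Curie'? context: {passage}"
answer = "physicist"  # fine-grained type; assumed label, as if drawn from a knowledge graph

# Standard text-to-text encoding: question as input, type string as target.
inputs = tokenizer(question, return_tensors="pt", truncation=True)
labels = tokenizer(answer, return_tensors="pt", truncation=True).input_ids

# One gradient step of ordinary seq2seq (text-to-text) training.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
print(float(outputs.loss))
```

In practice many such questions (type of an entity, entities of a type, etc.) would be mixed into multi-task pre-training batches; the single example above only illustrates the text-to-text formulation.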

Cited by 0 publications
References 25 publications