Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume 2021
DOI: 10.18653/v1/2021.eacl-main.153
Language Models as Knowledge Bases: On Entity Representations, Storage Capacity, and Paraphrased Queries

Abstract: Pretrained language models have been suggested as a possible alternative or complement to structured knowledge bases. However, this emerging LM-as-KB paradigm has so far only been considered in a very limited setting, which only allows handling 21k entities whose names are found in common LM vocabularies. Furthermore, a major benefit of this paradigm, i.e., querying the KB using natural language paraphrases, is underexplored. Here we formulate two basic requirements for treating LMs as KBs: (i) the ability to st…
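The paraphrased-query requirement mentioned in the abstract can be illustrated with a cloze-style probe. The sketch below queries an off-the-shelf masked LM with a canonical template and a paraphrase of the same question; the model name, prompts, and entity are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of querying a pretrained LM as a knowledge base.
# Assumptions: bert-base-cased as the LM and hand-written cloze prompts;
# illustrative only, not the paper's actual probing protocol.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

queries = [
    "Dante was born in [MASK].",           # canonical template
    "The birthplace of Dante is [MASK].",  # natural-language paraphrase
]

for query in queries:
    predictions = fill_mask(query, top_k=3)
    answers = ", ".join(f"{p['token_str']} ({p['score']:.2f})" for p in predictions)
    print(f"{query} -> {answers}")
```

If the LM reliably stores the fact, both phrasings should surface the same answer near the top of the ranking; divergence between the two is one way the paraphrase robustness discussed in the abstract can be measured.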

Cited by 51 publications (28 citation statements). References 43 publications.
“…On the other hand, there has been increasing interest in the use of large pretrained generative models as a source of knowledge. Heinzerling and Inui (2021) argue that large pretrained models can in fact serve as knowledge bases, while Davison et al. (2019) argue that pretrained models can accurately assess the validity of knowledge mined from raw text. While pretrained models are also trained on raw texts, similar to knowledge bases and graphs, they draw from this knowledge probabilistically and thus are a distinct approach.…”
Section: External Knowledge
confidence: 99%
“…Tangential emerging research areas that are relevant to our work are knowledge acquisition via pre-trained language models and prompt-engineering. Prior work (such as [11], [16] or [48]) uses the knowledge within pre-trained language models for QA, fact checking or truthful generation. Significant efforts have focused on building better prompts and a representative collection can be found in a recent survey [19].…”
Section: Related Work
confidence: 99%
“…Prior efforts have also applied template and probe-based methods (Bouraoui et al., 2020; Petroni et al., 2019; Jiang et al., 2020b; Heinzerling and Inui, 2020) to extract relational knowledge from large pretrained models; we draw upon these techniques in this work. However, these works focus on general domain knowledge extraction, rather than clinical tasks, which pose unique privacy concerns.…”
Section: Related Work
confidence: 99%
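As a concrete illustration of the template- and probe-based extraction this citation refers to, the sketch below fills one relation template for several subject entities with a generic masked LM. The model, template, and subjects are assumptions chosen for demonstration and do not reproduce any cited work's procedure.

```python
# Illustrative sketch of template-based relational probing (LAMA-style).
# Assumptions: bert-base-cased, a single place-of-birth template, and a
# hand-picked set of subjects; not the setup of any specific cited paper.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

template = "{subject} was born in [MASK]."
subjects = ["Marie Curie", "Alan Turing", "Frida Kahlo"]

for subject in subjects:
    top = fill_mask(template.format(subject=subject), top_k=1)[0]
    print(f"{subject} -> {top['token_str']} (p={top['score']:.2f})")
```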