2022
DOI: 10.48550/arxiv.2206.14268
Preprint

BertNet: Harvesting Knowledge Graphs from Pretrained Language Models

Cited by 5 publications (7 citation statements)
References: 0 publications
“…PLMs have been used to extract KGs. Hao et al. (2022) use BERT-like models to extract knowledge of arbitrary new relation types and entities, without being restricted by preexisting knowledge or corpora, evaluating the outcomes with human effort. Wang et al. (2020) proceed similarly but compare the quality with existing KGs, which is an approach closer to the experiment reported here.…”
Section: Large Language Models and Knowledge Graphs
confidence: 99%
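
The snippet above describes prompting a BERT-like model to surface factual knowledge. As a rough illustration only (not the BertNet pipeline itself), the sketch below queries a masked language model with a hand-written prompt; the model name and the prompt are placeholder assumptions.

```python
# Minimal illustration of prompting a BERT-like masked LM for factual
# completions (the general idea behind harvesting knowledge from PLMs).
# This is NOT the BertNet pipeline; model name and prompt are placeholders.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# A hand-written prompt template for a (subject, relation, object) triple;
# the [MASK] slot is where the model proposes candidate objects.
prompt = "The capital of France is [MASK]."

for candidate in fill_mask(prompt, top_k=3):
    # Each candidate carries the predicted token and the model's probability.
    print(candidate["token_str"], round(candidate["score"], 3))
```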
“…Many approaches have been proposed to achieve this link prediction task, some focused on observable features such as Rule Mining [17][16][38][24] or the Path Ranking Algorithm [31][32], and others focused on capturing latent features of the graph by using different embedding techniques. In our paper, we are mainly focusing on the KG embedding approaches.…”
Section: Knowledge Graph Embedding and Link Prediction
confidence: 99%
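
For readers unfamiliar with the KG-embedding approaches the snippet refers to, here is a minimal TransE-style scoring sketch; the entities, relation, and random embeddings are invented stand-ins, not values from any cited system.

```python
# Sketch of a TransE-style scoring function often used for KG link prediction:
# a triple (h, r, t) is plausible when the tail embedding is close to h + r.
# Embeddings here are random stand-ins; a real model would learn them.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entities = {name: rng.normal(size=dim) for name in ["Paris", "France", "Berlin"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(head, relation, tail):
    # Lower distance (higher score) means a more plausible triple under TransE.
    return -np.linalg.norm(entities[head] + relations[relation] - entities[tail])

# Rank candidate tails for ("Paris", "capital_of", ?).
for tail in ["France", "Berlin"]:
    print(tail, transe_score("Paris", "capital_of", tail))
```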
“…BertNet [24] tried to address this issue along with the dependency on having massive existing data to learn from. They proposed applying a paraphrasing stage before extracting the triplets; that way, there is a more diverse set of alternative prompts from which to generate the entities and the triplets.…”
Section: Knowledge Graph Construction
confidence: 99%
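
A hedged sketch of the idea the snippet attributes to BertNet: query several paraphrases of the same relation prompt and pool the candidates they produce. The paraphrases below are hard-coded assumptions; BertNet generates them automatically, and its actual scoring differs.

```python
# Pool masked-LM candidates across paraphrased prompts for one relation.
# The paraphrase list is hand-written here purely for illustration.
from collections import Counter
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

paraphrases = [
    "The capital of France is [MASK].",
    "France's capital city is [MASK].",
    "[MASK] is the capital of France.",
]

votes = Counter()
for prompt in paraphrases:
    for candidate in fill_mask(prompt, top_k=5):
        # Accumulate the model probability each paraphrase assigns to a candidate.
        votes[candidate["token_str"]] += candidate["score"]

# Candidates supported by many paraphrases float to the top.
print(votes.most_common(3))
```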
“…Prior work showed that prompts can be automatically optimized to produce factually correct claims more robustly (Lester et al., 2021; Zhong et al., 2021; Qin and Eisner, 2021). Hao et al. (2022) utilized multiple generated paraphrases to gauge consistency, and other works (Elazar et al., 2021) further proposed training objectives to improve model consistency. Another approach to handling multiple outputs is via variants of decoding strategies, or model ensembles (Sun et al., 2022).…”
Section: Model Calibration
confidence: 99%
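
To make the consistency idea concrete, here is a toy calculation, with made-up answers, of how agreement across paraphrased prompts could be scored; it is not the scoring used in any of the cited works.

```python
# Toy consistency gauge across paraphrased prompts: the score is the share of
# paraphrases whose top answer matches the majority answer.
# The answers below are fabricated for the example.
from collections import Counter

top_answers = ["paris", "paris", "lyon", "paris"]  # one top prediction per paraphrase

majority, count = Counter(top_answers).most_common(1)[0]
consistency = count / len(top_answers)
print(majority, consistency)  # paris 0.75
```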