2022
DOI: 10.1609/aaai.v36i10.21425

DKPLM: Decomposable Knowledge-Enhanced Pre-trained Language Model for Natural Language Understanding

Abstract: Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve language understanding abilities. To guarantee effective knowledge injection, previous studies integrate models with knowledge encoders for representing knowledge retrieved from knowledge graphs. The ope… Experiments show that our model significantly outperforms other KEPLMs on zero-shot knowledge probing tasks and multiple knowledge-aware language understanding tasks.
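
As a rough illustration of the idea described in the abstract, the minimal Python sketch below retrieves relation triples for entity mentions in a sentence and verbalizes them into the pre-training input. It is not the authors' DKPLM implementation: the toy knowledge graph, the retrieve_triples matcher, and the bracketed verbalization format are all hypothetical, and a real KEPLM typically fuses triples through knowledge encoders or entity representations rather than plain text concatenation.

from typing import Dict, List, Tuple

# Hypothetical toy knowledge graph: entity -> list of (relation, object) pairs.
TOY_KG: Dict[str, List[Tuple[str, str]]] = {
    "Einstein": [("born_in", "Ulm"), ("field", "physics")],
    "Ulm": [("located_in", "Germany")],
}

def retrieve_triples(text: str, kg: Dict[str, List[Tuple[str, str]]]) -> List[Tuple[str, str, str]]:
    # Naive mention detection: keep triples for any KG entity whose surface form appears in the text.
    return [(ent, rel, obj) for ent, facts in kg.items() if ent in text for rel, obj in facts]

def inject_knowledge(text: str, kg: Dict[str, List[Tuple[str, str]]]) -> str:
    # Verbalize the retrieved triples and append them to the sentence, yielding a
    # knowledge-enhanced pre-training example for a masked-LM objective.
    verbalized = " ".join(f"[{s} {r.replace('_', ' ')} {o}]" for s, r, o in retrieve_triples(text, kg))
    return f"{text} {verbalized}".strip()

print(inject_knowledge("Einstein was born in Ulm.", TOY_KG))
# -> Einstein was born in Ulm. [Einstein born in Ulm] [Einstein field physics] [Ulm located in Germany]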

Cited by 19 publications (5 citation statements)
References: 35 publications
“…Knowledge-enhanced Pre-training. Conventional pre-training methods lack factual knowledge (Zhang et al., 2022b; …). To deal with this issue, we present KP-PLM with a novel knowledge prompting paradigm for knowledge-enhanced pre-training.…”
Section: Core Capacities (mentioning)
confidence: 99%
“…Knowledge-enhanced PLMs (KEPLMs) improve the semantic understanding capability of the model by incorporating the relational triples of a knowledge graph into the language model pre-training process. DKPLM [17] found a suitable pre-training task for injecting the knowledge graph, which helps the model learn and represent knowledge without introducing additional parameters or models.…”
Section: PLM (mentioning)
confidence: 99%
“…There are substantial works that successfully tackle these subtasks using PLMs as the cornerstone [28-32]. Moreover, several works utilize structured knowledge to enhance PLMs beyond self-supervised training on large-scale corpora, with significant improvements on mention detection [33] and relation detection [34-37] tasks. Nevertheless, there is a lack of work making a comprehensive comparison across subtasks from the perspective of PLMs.…”
Section: Introduction (mentioning)
confidence: 99%