2021 IEEE 15th International Conference on Semantic Computing (ICSC)
DOI: 10.1109/icsc50631.2021.00038
A shallow neural model for relation prediction

Abstract: Knowledge graph completion refers to predicting missing triples. Most approaches achieve this goal by predicting entities, given an entity and a relation. We predict missing triples via relation prediction. To this end, we frame the relation prediction problem as a multi-label classification problem and propose a shallow neural model (SHALLOM) that accurately infers missing relations from entities. SHALLOM is analogous to C-BOW, as both approaches predict a central token (p) given surrounding tokens ((s, o)…
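The multi-label framing in the abstract lends itself to a compact implementation: concatenate the embeddings of the subject and object, pass them through a small feed-forward network, and score all relations jointly with a sigmoid output. A minimal sketch in PyTorch, where the layer sizes, dimensions, and class name are illustrative assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ShallowRelationPredictor(nn.Module):
    """Sketch of a shallow relation-prediction model: given the embeddings
    of a (subject, object) pair, produce one score per candidate relation."""
    def __init__(self, num_entities: int, num_relations: int,
                 emb_dim: int = 25, hidden_dim: int = 50):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden_dim),   # takes concatenated [e_s; e_o]
            nn.ReLU(),
            nn.Linear(hidden_dim, num_relations), # one logit per relation
        )

    def forward(self, s_idx: torch.Tensor, o_idx: torch.Tensor) -> torch.Tensor:
        x = torch.cat([self.entity_emb(s_idx), self.entity_emb(o_idx)], dim=-1)
        return self.mlp(x)  # raw logits; apply sigmoid for probabilities

# Multi-label training: a pair (s, o) may be linked by several relations,
# so the target is a multi-hot vector and binary cross-entropy is the loss.
model = ShallowRelationPredictor(num_entities=1000, num_relations=20)
loss_fn = nn.BCEWithLogitsLoss()
s, o = torch.tensor([0, 1]), torch.tensor([2, 3])
targets = torch.zeros(2, 20)
targets[0, 5] = 1.0  # pair 0 participates in relation 5
loss = loss_fn(model(s, o), targets)
loss.backward()
```

The sigmoid-per-relation output (rather than a softmax) is what makes the problem multi-label: each relation is scored independently, so several can be predicted for the same entity pair.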

Cited by 8 publications (4 citation statements). References 28 publications.
“…To train Drill, we first generate 10 learning problems for each benchmark dataset described in Section 5.1. During our experiments, we used the pretrained embeddings of the input knowledge graphs provided by [10,9]. We used ConEx embeddings for Family, Biopax, and Mutagenesis, and Shallom embeddings for Carcinogenesis.…”
Section: Methods
Mentioning confidence: 99%
“…We used ConEx embeddings for Family, Biopax, and Mutagenesis, and Shallom embeddings for Carcinogenesis. For more details about the configurations of the models, we refer to the project pages of [10,9]. For each generated learning problem, we trained Drill in an ε-greedy fashion using the following configuration: ADAM optimizer with a learning rate of .01, mini-batches of size 512, number of episodes set to 100, an epsilon decay of .01, a discounting factor γ of .99, 32 input channels, and (3 × 3)…”
Section: Methods
Mentioning confidence: 99%
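The quoted configuration combines ε-greedy exploration with a per-episode epsilon decay and a discount factor γ of .99. A minimal sketch of the action-selection and decay logic under those settings (the linear decay schedule and the placeholder Q-values are assumptions, not Drill's actual implementation):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon explore (random action); otherwise exploit
    the action with the highest estimated value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Skeleton loop mirroring the quoted settings: 100 episodes, an epsilon
# decay of .01 per episode, and a discount factor gamma of .99 (gamma
# would discount future rewards in the Q-update, which is omitted here).
epsilon, epsilon_decay = 1.0, 0.01
gamma, num_episodes = 0.99, 100
for episode in range(num_episodes):
    q_values = [0.1, 0.4, 0.2]                   # placeholder model outputs
    action = epsilon_greedy(q_values, epsilon)
    epsilon = max(0.0, epsilon - epsilon_decay)  # anneal exploration
```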
“…Scholarly Publications: Our framework has been effectively used to learn knowledge graph embeddings in several published works [3, 4, 13-17].…”
Section: Software Impact
Mentioning confidence: 99%