Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2018
DOI: 10.18653/v1/d18-2024
OpenKE: An Open Toolkit for Knowledge Embedding

Abstract: We release an open toolkit for knowledge embedding (OpenKE), which provides a unified framework and various fundamental models to embed knowledge graphs into a continuous low-dimensional space. OpenKE prioritizes operational efficiency to support quick model validation and large-scale knowledge representation learning. Meanwhile, OpenKE maintains sufficient modularity and extensibility to easily incorporate new models into the framework. Besides the toolkit, the embeddings of some existing large-scale knowledge graphs pre-trained by OpenKE are also available, which can be directly applied in many applications including information retrieval, personalized recommendation and question answering. The toolkit, documentation, and pre-trained embeddings are all released on http://openke.thunlp.org/.
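For readers evaluating the toolkit, the following is a minimal sketch of training TransE with OpenKE, adapted from the example scripts in the OpenKE-PyTorch release; module paths, parameter names, and defaults may differ across OpenKE versions, so treat it as illustrative rather than canonical.

from openke.config import Trainer
from openke.module.model import TransE
from openke.module.loss import MarginLoss
from openke.module.strategy import NegativeSampling
from openke.data import TrainDataLoader

# Data loader for a benchmark knowledge graph shipped with the toolkit.
train_dataloader = TrainDataLoader(
    in_path="./benchmarks/FB15K237/",  # triple files plus entity/relation ids
    nbatches=100,
    threads=8,
    sampling_mode="normal",
    bern_flag=1,    # Bernoulli trick for corrupting heads vs. tails
    filter_flag=1,  # filter out corrupted triples that are actually true
    neg_ent=25,     # negative entities sampled per positive triple
    neg_rel=0,
)

# TransE embeds entities and relations in one continuous low-dimensional space.
transe = TransE(
    ent_tot=train_dataloader.get_ent_tot(),
    rel_tot=train_dataloader.get_rel_tot(),
    dim=200,
    p_norm=1,
    norm_flag=True,
)

# Margin-based ranking loss over positive and corrupted triples.
model = NegativeSampling(
    model=transe,
    loss=MarginLoss(margin=5.0),
    batch_size=train_dataloader.get_batch_size(),
)

trainer = Trainer(model=model, data_loader=train_dataloader,
                  train_times=1000, alpha=1.0, use_gpu=True)
trainer.run()
transe.save_checkpoint("./checkpoint/transe.ckpt")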


Cited by 279 publications (161 citation statements)
References 12 publications
“…Three bi-vector Trans models, named TransE-SYM, TransH-SYM, and TransD-SYM, are proposed by us. The experimental code implementation references the open-source project OpenKE [15]. These models are run on datasets with completed symmetric relations and achieve good results.…”
Section: Results of Experiments (mentioning)
confidence: 99%
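For context on what these Trans-family models compute: TransE and its variants score a triple (h, r, t) by how well the relation vector translates the head embedding onto the tail embedding. Below is a minimal NumPy sketch of the base TransE score; the SYM variants in the excerpt are not specified here, so this illustrates only the shared scoring idea.

import numpy as np

def transe_score(h, r, t, p=1):
    """TransE plausibility score: a lower ||h + r - t||_p means
    the triple (h, r, t) is considered more plausible."""
    return np.linalg.norm(h + r - t, ord=p)

# Toy example with random 50-dimensional embeddings.
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=50), rng.normal(size=50), rng.normal(size=50)
print(transe_score(h, r, t))      # arbitrary (untrained) score
print(transe_score(h, r, h + r))  # perfect translation -> score 0.0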
“…We implemented TransE [16], one of the most popular techniques of this kind, on top of Trident and compared its training runtime against that of OpenKE [35], a state-of-the-art library. Table 6 reports the runtime to train a model using as input a subset of YAGO that was used in other works [69].…”
Section: SPARQL (mentioning)
confidence: 99%
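The comparison in this excerpt is a wall-clock measurement of training; a generic best-of-N timing harness of the kind used for such comparisons might look like the sketch below. The train_fn callable is a placeholder for whichever system is being measured, not Trident's or OpenKE's actual API.

import time

def time_training(train_fn, repeats=3):
    """Run a training callable `repeats` times and return the best
    (minimum) wall-clock duration in seconds, reducing noise from
    caching and background load."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        train_fn()
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical usage: time_training(lambda: trainer.run())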
“…We obtain the highest Hits@10 scores on the validation set with a learning rate of 5e-5 and 400 filters on WN18; a learning rate of 1e-5 and 50 filters on FB15k; a learning rate of 5e-6 and 400 filters on WN18RR; and a learning rate of 1e-5 and 200 filters on FB15k-237. For the comparison methods, we use the code released by [11], [7] and [22]. Table 2.…”
Section: Implementation Details (mentioning)
confidence: 99%
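For readability, the best settings reported in the excerpt can be collected into a small lookup table; the values are copied directly from the quoted text, while the dictionary name itself is ours.

# Best validation Hits@10 hyperparameters, as reported in the excerpt above.
BEST_HYPERPARAMS = {
    "WN18":      {"learning_rate": 5e-5, "num_filters": 400},
    "FB15k":     {"learning_rate": 1e-5, "num_filters": 50},
    "WN18RR":    {"learning_rate": 5e-6, "num_filters": 400},
    "FB15k-237": {"learning_rate": 1e-5, "num_filters": 200},
}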