2021
DOI: 10.48550/arxiv.2108.05410
Preprint

User-friendly Comparison of Similarity Algorithms on Wikidata

Abstract: While the similarity between two concept words has been evaluated and studied for decades, much less attention has been devoted to algorithms that can compute the similarity of nodes in very large knowledge graphs, like Wikidata. To facilitate investigations and head-to-head comparisons of similarity algorithms on Wikidata, we present a user-friendly interface that allows flexible computation of similarity between Qnodes in Wikidata. At present, the similarity interface supports four algorithms, based on: graph…
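Because the interface described in the abstract is exposed as a web service, head-to-head comparisons can be scripted. Below is a minimal Python sketch of querying such a service for one pair of Qnodes; the endpoint URL, parameter names, and response field are assumptions for illustration, not details taken from the abstract.

```python
import requests

# Hypothetical endpoint and parameter names for the similarity interface;
# these are assumptions for illustration, not confirmed by the abstract.
API_URL = "https://kgtk.isi.edu/similarity_api"

def qnode_similarity(q1: str, q2: str, algorithm: str) -> float:
    """Query the similarity service for one pair of Wikidata Qnodes."""
    response = requests.get(
        API_URL,
        params={"q1": q1, "q2": q2, "similarity_type": algorithm},
        timeout=30,
    )
    response.raise_for_status()
    # The response field name is also an assumption.
    return response.json()["similarity"]

if __name__ == "__main__":
    # Compare "house cat" (Q146) and "dog" (Q144) under one of the supported algorithms.
    print(qnode_similarity("Q146", "Q144", "class"))
```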

Cited by 2 publications (5 citation statements)
References 8 publications
“…Composite embeddings: Considering that KG and LM embeddings may provide complementary insights for similarity [40], we create two composite embeddings. Composite-all combines all our embedding models: two translation models (TransE and ComplEx), two random-walk models (Deepwalk, S-Deepwalk), and two LMs (Abstract and Lexicalize).…”
Section: Embedding Combination (mentioning, confidence: 99%)
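The composite embedding described in this statement can be approximated by concatenating each model's normalized vector for a node and scoring pairs with cosine similarity. A minimal sketch, assuming every model is available as a simple Qnode-to-vector lookup; the toy tables below are illustrative, not the cited paper's actual embeddings.

```python
import numpy as np

def composite_vector(qnode: str, models: list[dict[str, np.ndarray]]) -> np.ndarray:
    """Concatenate L2-normalized vectors for one Qnode across several embedding models."""
    parts = [model[qnode] / np.linalg.norm(model[qnode]) for model in models]
    return np.concatenate(parts)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Illustrative toy tables; in practice these would be the TransE, ComplEx,
# Deepwalk, S-Deepwalk, and LM-based embedding lookups named above.
transe = {"Q146": np.array([0.2, 0.9]), "Q144": np.array([0.3, 0.8])}
deepwalk = {"Q146": np.array([0.5, 0.1, 0.7]), "Q144": np.array([0.4, 0.2, 0.6])}

score = cosine(
    composite_vector("Q146", [transe, deepwalk]),
    composite_vector("Q144", [transe, deepwalk]),
)
print(score)
```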
“…We use scikit-learn for supervised learning. We use KGTK's similarity API [40] to obtain scores for the metrics Class, Jiang Conrath, and TopSim.…”
Section: Implementation Details (mentioning, confidence: 99%)
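One way to read this setup is that the per-metric scores returned by the similarity API (Class, Jiang Conrath, TopSim) become features for a scikit-learn model. The sketch below illustrates that pattern with made-up scores and an arbitrarily chosen classifier; it is not the cited paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: each row holds Class, Jiang-Conrath, and TopSim scores
# for one Qnode pair; the labels mark whether the pair is considered related.
X = np.array([
    [0.91, 0.74, 0.88],
    [0.85, 0.66, 0.79],
    [0.12, 0.05, 0.20],
    [0.22, 0.10, 0.15],
])
y = np.array([1, 1, 0, 0])

clf = LogisticRegression().fit(X, y)
# Probability that a new pair with these three similarity scores is related.
print(clf.predict_proba([[0.80, 0.60, 0.75]]))
```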
“…For Wikidata, we used Falcon 2.0 (Sakor et al., 2020) for entity linking. We collected entity embeddings from KGTK (Ilievski et al., 2021) (the text version); we also reported the results using RDF2vec (Ristoski et al., 2019) (the sg_200_5_5_15_4_500 version) for comparison.…”
Section: Explicit Knowledge-enhanced Retrieval (mentioning, confidence: 99%)
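A rough sketch of the pipeline this statement describes: link surface mentions to Wikidata Qnodes, then look up entity vectors for the linked nodes. The Falcon 2.0 endpoint, request shape, and the local embedding file format are all assumptions for illustration, not details taken from the quoted paper.

```python
import numpy as np
import requests

# The Falcon 2.0 endpoint, query parameter, and payload shape are assumptions
# based on its public demo service; verify against the current documentation.
FALCON_URL = "https://labs.tib.eu/falcon/falcon2/api"

def link_wikidata_entities(text: str) -> dict:
    """Send raw text to Falcon 2.0 and return its JSON, which lists linked Wikidata entities."""
    response = requests.post(
        FALCON_URL, params={"mode": "long"}, json={"text": text}, timeout=30
    )
    response.raise_for_status()
    return response.json()

def load_text_embeddings(path: str) -> dict[str, np.ndarray]:
    """Load word2vec-style text embeddings, one 'Qxxx v1 v2 ...' line per entity (format assumed)."""
    table = {}
    with open(path) as fh:
        for line in fh:
            qnode, *values = line.split()
            table[qnode] = np.asarray(values, dtype=float)
    return table

# linked = link_wikidata_entities("Barack Obama was born in Hawaii.")
# vectors = load_text_embeddings("kgtk_text_embeddings.txt")  # hypothetical local export
```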