2015
DOI: 10.3233/sw-140147
DBnary: Wiktionary as a Lemon-based multilingual lexical resource in RDF

Abstract: Contributive resources such as Wikipedia have proved valuable to Natural Language Processing and multilingual Information Retrieval applications. This work focuses on Wiktionary, the dictionary among the resources sponsored by the Wikimedia Foundation. In this article, we present our extraction of multilingual lexical data from Wiktionary and its provision to the community as Multilingual Lexical Linked Open Data (MLLOD). This lexical resource is structured using the LEMON model.

Cited by 60 publications (71 citation statements)
References 5 publications
“…CompiLIG The best Spanish-English performance on SNLI sentences was achieved by CompiLIG using features including: cross-lingual conceptual similarity using DBNary (Serasset, 2015), cross-language MultiVec word embeddings (Berard et al, 2016), and Brychcin and Svoboda (2016)'s improvements to Sultan et al (2015)'s method. (Nagoudi et al, 2017) Using only weighted word embeddings, LIM-LIG took second place on Arabic.…”
Section: Methodsmentioning
confidence: 99%
“…A bag-of-words S from each sentence S is built, by filtering stop words and by using a function that returns for a given word all its possible translations. These translations are jointly given by a linked lexical resource, DBNary (Sérasset, 2015), and by cross-lingual word embeddings. More precisely, we use the top 10 closest words in the embeddings model and all the available translations from DBNary to build the bag-of-words of a word.…”
Section: Cross-language Conceptualmentioning
confidence: 99%
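The cross-language bag-of-words construction quoted above can be sketched as follows. This is a minimal illustration, not the cited authors' implementation: `translate` stands in for a DBNary translation lookup and `nearest_neighbors` for a cross-lingual embedding query, and both are hypothetical placeholders.

```python
# Hypothetical sketch of the cross-language bag-of-words construction
# described in the citation above (CompiLIG-style): filter stop words,
# then union each word's translations with its top-k embedding neighbors.

STOP_WORDS = {"the", "a", "of", "el", "la", "de"}  # illustrative stop list


def translate(word):
    # Placeholder: a real system would query DBNary for all
    # available translations of `word`. Here we echo the word back.
    return {word}


def nearest_neighbors(word, k=10):
    # Placeholder: a real system would return the k closest words
    # in a cross-lingual word-embedding space.
    return set()


def bag_of_words(sentence, k=10):
    """Build the cross-language bag-of-words of a sentence:
    drop stop words, then merge each remaining word's DBNary
    translations with its top-k embedding neighbors."""
    bag = set()
    for word in sentence.lower().split():
        if word in STOP_WORDS:
            continue
        bag |= translate(word)
        bag |= nearest_neighbors(word, k)
    return bag
```

With real DBNary and embedding back-ends plugged in, the sentence-level bag is simply the union of the per-word bags, as the excerpt describes.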
“…We reuse the idea of Pataki (2012) which, for each sentence, builds a bag-of-words by getting all the available translations of each word of the sentence. For that, we use a linked lexical resource called DBNary (Sérasset, 2015). The bag-of-words of a sentence is the merge of the bag-of-words of the words of the sentence.…”
Section: Cross-languagementioning
confidence: 99%
“…We use Muhr et al (2010)'s implementation, which consists in replacing each word of one text by its most likely translations in the language of the other text, leading to a bag-of-words. We use DBNary (Sérasset, 2015) to get the translations. The metric used to compare two texts is a monolingual matching based on strict intersection of bags-of-words.…”
Section: Mt-based Modelsmentioning
confidence: 99%
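The strict bag-of-words intersection matching mentioned in the last excerpt could be sketched as below. The normalization by the smaller bag is an assumption for illustration; the quoted work does not specify one.

```python
def overlap_score(bag_a, bag_b):
    """Monolingual matching by strict intersection of two bags-of-words.
    Normalizing by the smaller bag is a hypothetical choice made here
    so the score lies in [0, 1]."""
    if not bag_a or not bag_b:
        return 0.0
    return len(bag_a & bag_b) / min(len(bag_a), len(bag_b))
```

In the MT-based setup described above, one text is first mapped into the other text's language via DBNary translations, and this score is then computed over the two resulting bags.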