2018
DOI: 10.2991/ijcis.11.1.28
Learning Turkish Hypernymy Using Word Embeddings

Abstract: Recently, Neural Network Language Models have been effectively applied to many types of Natural Language Processing (NLP) tasks. One popular type of task is the discovery of semantic and syntactic regularities that support researchers in building a lexicon. Word embedding representations are notably good at discovering such linguistic regularities. We argue that two supervised learning approaches based on word embeddings can be successfully applied to the hypernym problem, namely, utilizing embedding offs…

Cited by 7 publications (6 citation statements) · References 29 publications
“…The TTC-3600 and TTC-4900 datasets have been used as benchmark datasets in Turkish text classification. 10,11,16,46,47 Both datasets are balanced: TTC-3600 has six news categories with 600 articles each (3,600 in total), while TTC-4900 has seven categories with 700 samples each (4,900 in total). The categories and the number of samples in each category are given in Table 2.…”
Section: Datasetsmentioning
confidence: 99%
“…One can say that, for simplicity, a linear model trained with the Adagrad or RMSProp optimizer, a cosine-proximity loss function, and 15 epochs can learn a mapping from a noun to its hypernym. The table also indicates that our models achieve a 90.75% success rate and outperform the other study [22] that used the same dataset on the same problem. Taking SGD as the baseline, our results show that the proposed model converges faster and achieves better accuracy.…”
Section: Resultsmentioning
confidence: 56%
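The setup described in the citation statement above (a linear map from a noun embedding to its hypernym embedding, trained with a cosine-proximity loss, Adagrad updates, and 15 epochs) can be sketched as follows. This is a minimal illustration on synthetic vectors, not the cited authors' implementation; the embedding dimension, learning rate, and toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_pairs = 50, 200                      # assumed embedding size / pair count

# Synthetic stand-ins for noun and hypernym embeddings: hypernym vectors
# are a (hidden) linear transform of the noun vectors.
X = rng.normal(size=(n_pairs, dim))
true_map = rng.normal(size=(dim, dim)) / np.sqrt(dim)
Y = X @ true_map

def mean_cos(W):
    """Mean cosine similarity between predicted (X @ W) and true hypernyms."""
    P = X @ W
    num = np.sum(P * Y, axis=1)
    den = np.linalg.norm(P, axis=1) * np.linalg.norm(Y, axis=1)
    return float(np.mean(num / den))

W = 0.01 * rng.normal(size=(dim, dim))      # linear model parameters
g2 = np.zeros_like(W)                       # Adagrad squared-gradient accumulator
lr = 0.1                                    # assumed learning rate
before = mean_cos(W)

for epoch in range(15):                     # 15 epochs, per the statement
    for x, y in zip(X, Y):
        p = x @ W                           # predicted hypernym embedding
        norm_p, norm_y = np.linalg.norm(p), np.linalg.norm(y)
        cos = (p @ y) / (norm_p * norm_y)
        # Gradient of cos(p, y) with respect to p; loss = -cos(p, y).
        dcos_dp = y / (norm_p * norm_y) - cos * p / norm_p**2
        grad = -np.outer(x, dcos_dp)
        g2 += grad**2                       # Adagrad update
        W -= lr * grad / (np.sqrt(g2) + 1e-8)

after = mean_cos(W)
```

After training, `after` should exceed `before`, showing that the linear map aligns predicted vectors with the hypernym embeddings in cosine similarity, which is the quantity the cosine-proximity loss optimizes.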
“…The benchmark dataset contains 735 noun–hypernym pairs, taken from another study on the Turkish language [22]. Pairs such as (burdur → il) and (doktor → meslek) were randomly selected from a Turkish corpus, a natural language resource for Turkish.…”
Section: Resultsmentioning
confidence: 99%