DOI: 10.12681/eadd/29846

Network-based distributional semantic models

Abstract (excerpt, translated from Greek; begins mid-sentence): …the representation of the semantic neighborhoods of multi-word terms composed of nouns, as well as the estimation of their semantic similarity. Very good results are achieved for the above applications, demonstrating the adaptability of the proposed models.

Cited by 5 publications (4 citation statements)
References 164 publications (240 reference statements)
“…Numerous metrics have been proposed for the estimation of semantic similarity between words (a more detailed analysis can be found in [33]). In this work we utilize corpus co-occurrence statistics and, specifically, the Dice coefficient D metric defined as follows…”
Section: Semantic Similarity Metrics
confidence: 99%
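The excerpt's definition of the Dice coefficient is cut off by the scrape. A minimal sketch of the standard corpus-based form, D(w1, w2) = 2·f(w1, w2) / (f(w1) + f(w2)), where f(·) denotes occurrence and co-occurrence counts — the sentence-window counting scheme below is an illustrative assumption, not necessarily the thesis's exact setup:

```python
from collections import Counter
from itertools import combinations

def dice_coefficient(w1, w2, unigram_counts, cooccurrence_counts):
    """Dice similarity from corpus counts:
    D = 2 * f(w1, w2) / (f(w1) + f(w2))."""
    joint = cooccurrence_counts[frozenset((w1, w2))]
    denom = unigram_counts[w1] + unigram_counts[w2]
    return 2.0 * joint / denom if denom else 0.0

# Toy corpus: co-occurrence counted within a sentence window
# (the window choice is an assumption for illustration).
sentences = [
    ["strong", "tea"],
    ["strong", "coffee"],
    ["hot", "tea"],
]
uni, co = Counter(), Counter()
for sent in sentences:
    uni.update(set(sent))
    co.update(frozenset(pair) for pair in combinations(set(sent), 2))

print(dice_coefficient("strong", "tea", uni, co))  # 2*1 / (2+2) = 0.5
```

Because `Counter` returns 0 for unseen pairs, word pairs that never co-occur get a similarity of 0 without special-casing.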
“…The activation layer is motivated by the phenomenon of semantic priming (McNamara, 2005), especially for highly coherent lexical units, such as unigrams and bigrams. In the framework of DSMs, activation layers were computed for the case of unigrams in (Iosif and Potamianos, 2015), and were extended to short phrases (bigrams) in (Iosif, 2013). Consider a phrase, i = (i1 i2), where i1 and i2 denote its first and second constituents.…”
Section: Layer 1: Activation Model
confidence: 99%
“…Such efforts proved to be effective when computing the similarity between two-word phrases; however, their limitations were revealed for the case of longer structures (Polajnar et al., 2014), where the composition of meaning becomes more complex. Bengio and Mikolov (2003; 2013) proposed an approach based on deep learning for building language models that address the problem of language creativity. These models appear to be steadily gaining support in comparison with traditional DSMs.…”
Section: Introduction
confidence: 99%
“…Concept induction is realised by estimating the semantic similarity between the terminal tokens (words) that constitute the corpus vocabulary. Word similarities can be estimated by a variety of similarity metrics [18,19]. In this work, the distributional hypothesis of meaning (i.e., "similarity of context implies similarity of meaning" [20]) is adopted and the semantic similarity between two words is estimated as the Manhattan-norm of their respective bigram probability distributions of left and right contexts [8].…”
Section: The Bottom-up Approach
confidence: 99%
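The Manhattan-norm comparison of left and right bigram context distributions described in the excerpt can be sketched as below. The unsmoothed maximum-likelihood estimation and the mapping of distance to similarity (1 / (1 + d)) are illustrative assumptions, not the exact formulation of the cited work [8]:

```python
from collections import Counter, defaultdict

def context_distributions(corpus):
    """Estimate, for each word, bigram probability distributions
    over its left and right contexts from a tokenized corpus."""
    left, right = defaultdict(Counter), defaultdict(Counter)
    for sent in corpus:
        for prev, cur in zip(sent, sent[1:]):
            right[prev][cur] += 1
            left[cur][prev] += 1
    def norm(counts):
        total = sum(counts.values())
        return {c: n / total for c, n in counts.items()}
    return ({w: norm(c) for w, c in left.items()},
            {w: norm(c) for w, c in right.items()})

def manhattan_distance(p, q):
    """L1 (Manhattan) distance between two discrete distributions."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def similarity(w1, w2, left, right):
    """Smaller combined L1 distance over left and right context
    distributions -> higher similarity (this mapping is assumed)."""
    d = (manhattan_distance(left.get(w1, {}), left.get(w2, {}))
         + manhattan_distance(right.get(w1, {}), right.get(w2, {})))
    return 1.0 / (1.0 + d)

corpus = [
    ["the", "cat", "sleeps"],
    ["the", "dog", "sleeps"],
    ["the", "cat", "runs"],
]
L, R = context_distributions(corpus)
print(similarity("cat", "dog", L, R))  # identical left contexts, partly shared right
```

Words that appear in the same contexts get distance 0 and similarity 1.0, directly implementing the distributional hypothesis quoted above ("similarity of context implies similarity of meaning").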