2020
DOI: 10.22266/ijies2020.0229.15
A Neural Network Model for Efficient Antonymy-Synonymy Classification by Exploiting Co-occurrence Contexts and Word-Structure Patterns

Abstract: Antonymy and synonymy are basic semantic relations between words. Automatically distinguishing between antonymy and synonymy is an important task in natural language processing. This task is hard because antonyms and synonyms tend to occur in highly similar contexts. Recent studies often focus on exploiting dense vector representations of words to deal with this problem. In this paper, we present a study on antonymy-synonymy discrimination for the Vietnamese language. We propose a deep neural network model (DVA…
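The abstract frames the task as deciding whether a word pair is antonymous or synonymous from dense vector representations. As a rough, illustrative sketch only (not the authors' DVA model, whose description is truncated above), a pair classifier over word embeddings might look like the following PyTorch code; the vocabulary size, dimensions, and the way the two word vectors are combined are all assumptions.

```python
import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    """Toy antonym/synonym classifier over a pair of word vectors.

    Illustrative sketch only, not the DVA model from the paper: it embeds
    both words, concatenates their vectors with their element-wise
    difference, and scores the pair with a small MLP.
    """

    def __init__(self, vocab_size: int, embed_dim: int = 300, hidden_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)  # could be initialized from pre-trained vectors
        self.mlp = nn.Sequential(
            nn.Linear(3 * embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # 2 classes: antonym vs. synonym
        )

    def forward(self, word1_ids: torch.Tensor, word2_ids: torch.Tensor) -> torch.Tensor:
        v1 = self.embedding(word1_ids)
        v2 = self.embedding(word2_ids)
        features = torch.cat([v1, v2, v1 - v2], dim=-1)
        return self.mlp(features)  # raw logits; apply cross-entropy during training

# Example usage with dummy word indices (hypothetical 10,000-word vocabulary).
model = PairClassifier(vocab_size=10_000)
logits = model(torch.tensor([12, 7]), torch.tensor([845, 93]))
print(logits.shape)  # torch.Size([2, 2])
```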

Cited by 2 publications (7 citation statements)
References 24 publications
“…The deep learning architectures mainly used are the Convolutional Neural Network (CNN) [6], the Recurrent Neural Network (RNN) [7,8], and combinations of both [3][4][5]. The most commonly used RNN is the Long Short-Term Memory (LSTM) network.…”
Section: Introduction
confidence: 99%
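The statement above points to CNNs, recurrent networks (usually LSTMs), and combinations of both as the dominant architectures. Below is a minimal, generic sketch of such a CNN-plus-LSTM text classifier; it illustrates the combination in general, not any specific cited model, and all layer sizes are assumed.

```python
import torch
import torch.nn as nn

class CnnLstmClassifier(nn.Module):
    """Illustrative CNN + LSTM text classifier (a generic combination, not a specific paper's model)."""

    def __init__(self, vocab_size: int, embed_dim: int = 300,
                 num_filters: int = 100, hidden_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # 1-D convolution extracts local n-gram features from the embedded sequence.
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size=3, padding=1)
        # LSTM models longer-range dependencies over the convolutional features.
        self.lstm = nn.LSTM(num_filters, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embedding(token_ids)                   # (batch, seq_len, embed_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))    # (batch, num_filters, seq_len)
        _, (h_n, _) = self.lstm(x.transpose(1, 2))      # final hidden state
        return self.fc(h_n[-1])                         # (batch, num_classes)

# Example with a dummy batch of two 8-token sequences.
model = CnnLstmClassifier(vocab_size=10_000)
print(model(torch.randint(0, 10_000, (2, 8))).shape)  # torch.Size([2, 2])
```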
“…Then, the highest layer produces the classification score with a fully connected layer applied to the most important local features. Bui et al. [8] propose a deep neural network approach for classifying antonyms and synonyms using co-occurrence contexts and word-structure patterns. In both the co-occurrence-context and word-structure components, each word in a sentence is represented by a vector combining a word embedding with a part-of-speech (POS) representation.…”
Section: Introduction
confidence: 99%
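The statement above describes each word being represented by a word embedding together with a POS representation before entering the network. One plausible way to build such a combined token representation is to concatenate two learned embeddings, as in the sketch below; the vocabulary size, tag-set size, and dimensions are assumptions, not values from Bui et al. [8].

```python
import torch
import torch.nn as nn

class WordPosEncoder(nn.Module):
    """Represents each token by concatenating a word embedding with a POS-tag embedding.

    Purely illustrative: vocabulary size, tag set, and dimensions are assumed,
    not taken from the cited paper.
    """

    def __init__(self, vocab_size: int, num_pos_tags: int,
                 word_dim: int = 300, pos_dim: int = 25):
        super().__init__()
        self.word_embedding = nn.Embedding(vocab_size, word_dim)
        self.pos_embedding = nn.Embedding(num_pos_tags, pos_dim)

    def forward(self, word_ids: torch.Tensor, pos_ids: torch.Tensor) -> torch.Tensor:
        # Output shape: (..., word_dim + pos_dim), one vector per token.
        return torch.cat([self.word_embedding(word_ids), self.pos_embedding(pos_ids)], dim=-1)

# Example: encode a 4-token sentence with dummy word and POS indices.
encoder = WordPosEncoder(vocab_size=10_000, num_pos_tags=20)
tokens = torch.tensor([[4, 17, 256, 9]])
pos_tags = torch.tensor([[1, 3, 3, 7]])
print(encoder(tokens, pos_tags).shape)  # torch.Size([1, 4, 325])
```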