2018
DOI: 10.1371/journal.pone.0193919

Dependency-based Siamese long short-term memory network for learning sentence representations

Abstract: Textual representations play an important role in the field of natural language processing (NLP). The efficiency of NLP tasks, such as text comprehension and information extraction, can be significantly improved with proper textual representations. As neural networks are gradually applied to learn the representation of words and phrases, fairly efficient models of learning short text representations have been developed, such as the continuous bag of words (CBOW) and skip-gram models, and they have been extensi…

Cited by 40 publications (32 citation statements) | References 13 publications
“…Although Benajiba et al. [54] employed a similar network model to learn the similarity of semantic patterns, unlike [53] they trained the network with a regression objective: the mean squared error against the SQL structure distance. The authors in [55] proposed a dependency-based Siamese LSTM network model in which the main and supporting components of a sentence are distinguished when learning the sentence representation. The authors in [56] aimed to learn thematic similarity between sentences.…”
Section: Deep Metric Learning Problems (mentioning)
confidence: 99%
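These statements all revolve around the Siamese pattern the cited works share: two weight-tied encoders whose outputs are compared with a distance function. Below is a minimal PyTorch sketch of that pattern; the dimensions, the single-layer LSTM, and the exp(-L1) similarity (borrowed from the MaLSTM line of work cited further down) are illustrative assumptions, not the exact models of [53]-[56].

```python
import torch
import torch.nn as nn

class SiameseLSTM(nn.Module):
    """Both inputs pass through the *same* embedding + LSTM (shared weights)."""

    def __init__(self, vocab_size: int, embed_dim: int = 100, hidden_dim: int = 50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Use the final hidden state as the fixed-size sentence vector.
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return h_n[-1]

    def forward(self, sent_a: torch.Tensor, sent_b: torch.Tensor) -> torch.Tensor:
        h_a, h_b = self.encode(sent_a), self.encode(sent_b)
        # exp(-L1 distance) maps the distance to a similarity in (0, 1].
        return torch.exp(-torch.sum(torch.abs(h_a - h_b), dim=1))

model = SiameseLSTM(vocab_size=1000)
sent_a = torch.randint(0, 1000, (2, 7))  # batch of 2 sentences, 7 token ids each
sent_b = torch.randint(0, 1000, (2, 7))
print(model(sent_a, sent_b))  # one similarity score per sentence pair
```

Swapping the loss attached to this output is what differentiates the cited variants, e.g. a regression target such as the SQL structure distance in [54] versus a similarity label.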
“…Classifying duplicate questions can be a tricky task, since the variability of language makes it difficult to know the actual meaning of a sentence with certainty. This task is similar to the paraphrase identification problem, a thoroughly researched Natural Language Processing (NLP) task [4]. It uses Natural Language Sentence Matching (NLSM) to decide whether a pair of sentences expresses the same intent in different words [5].…”
Section: Related Work (mentioning)
confidence: 99%
“…The reason for choosing the Manhattan distance over other similarity measures is that we are working with a large set of word embeddings spanning many dimensions. Many researchers have observed that the Manhattan distance not only performs well on very high-dimensional data but is also cheap to compute, since it measures the similarity between textual features as the sum of the absolute differences between two points along each coordinate axis [4], [6], [23]. The Manhattan equation for two points x and y is shown in Equation 7…”
Section: MaLSTM (mentioning)
confidence: 99%
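Equation 7 itself is cut off in this excerpt; for reference, the standard Manhattan (L1, taxicab) distance the statement describes is, for two n-dimensional points x and y:

```latex
d(\mathbf{x}, \mathbf{y}) = \sum_{i=1}^{n} \lvert x_i - y_i \rvert
```

In the MaLSTM setting this distance is typically converted into a similarity score via exp(-d(x, y)), which bounds it to the interval (0, 1].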
“…The process is as shown in Fig. … Indirectly, this gensim KeyedVectors model will build a dictionary generated from the word2vec library and the available text data, thereby producing a new dictionary that fits the case studied in this research [18].…”
Section: A. Word2Vec (unclassified)
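As a concrete illustration of the step this statement describes, building a corpus-specific vector dictionary with gensim's word2vec and keeping the resulting KeyedVectors, here is a minimal sketch; the toy corpus and the hyperparameters are assumptions, not the study's actual data or settings.

```python
from gensim.models import Word2Vec

# Hypothetical toy corpus standing in for the study's text data.
corpus = [
    ["machine", "learning", "improves", "text", "matching"],
    ["word", "embeddings", "capture", "semantic", "similarity"],
]

# Train word2vec on the corpus, then keep only the KeyedVectors: the
# word -> vector "dictionary" the statement refers to.
model = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1)
kv = model.wv

print(kv["text"].shape)                 # (50,) vector for a word in the new dictionary
print(kv.most_similar("text", topn=2))  # nearest neighbours within this vocabulary
```

Keeping only `model.wv` after training is a common design choice: the KeyedVectors object is lightweight and supports lookup and similarity queries without the full training state.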