2021
DOI: 10.1109/access.2021.3135807
Comparative Analysis of Word Embeddings in Assessing Semantic Similarity of Complex Sentences

Abstract: Semantic textual similarity is one of the open research challenges in the field of Natural Language Processing. Extensive research has been carried out in this field and near-perfect results are achieved by recent transformer-based models on existing benchmark datasets like the STS dataset and the SICK dataset. In this paper, we study the sentences in these datasets and analyze the sensitivity of various word embeddings with respect to the complexity of the sentences. In this article, we build a complex senten…

Cited by 6 publications (2 citation statements)
References 40 publications
“…Furthermore, a multimodal embedding method is proposed to address complexity classification from semantic, syntactic, and lexical perspectives simultaneously, replicating the research of Gargiulo et al. [34]. Dependency-tree and part-of-speech embeddings, along with linguistic rules, are used for syntactic complexity classification, while word embeddings are used to address semantic complexity, as studied by Chandrasekaran and Mago [35], as well as lexical complexity. Prior to complexity classification, fundamental NLP pre-processing operations such as lower-casing, stop-word filtering, and tokenization are performed.…”
Section: Hybrid Personalized Text Simplification Framework
confidence: 98%
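The pre-processing pipeline named in the excerpt above (lower-casing, stop-word filtering, tokenization) can be sketched in a few lines. This is a minimal illustration, not the cited authors' implementation; the stop-word set here is a small hand-picked subset chosen only for the example.

```python
import re

# Illustrative stop-word subset; real pipelines use a fuller list
# (e.g. from an NLP toolkit).
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in"}

def preprocess(text: str) -> list[str]:
    """Lower-case the text, tokenize on alphanumeric runs, drop stop words."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("The embeddings are used to address the semantic complexity."))
# → ['embeddings', 'used', 'address', 'semantic', 'complexity']
```

The cleaned token list would then feed the downstream complexity classifiers described in the excerpt.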
“…Hence, it is necessary to convert the text samples into numerical vectors. This conversion of raw text data into numerical values is called word embedding [31]. There are two types of word embedding techniques: frequency-based and prediction-based.…”
Section: A Architecture Of Proposed and Related Models
confidence: 99%
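The frequency-based side of the distinction drawn in the excerpt can be illustrated with a bag-of-words count vector; prediction-based embeddings (e.g. word2vec) instead learn dense vectors by training a model to predict context words and are not shown here. The vocabulary and sentence below are invented for the example.

```python
from collections import Counter

def count_vector(text: str, vocab: list[str]) -> list[int]:
    """Frequency-based embedding: map a sentence to raw term counts
    over a fixed vocabulary (zero for out-of-vocabulary words)."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

vocab = ["cat", "dog", "sat", "mat"]
print(count_vector("the cat sat on the mat", vocab))  # → [1, 0, 1, 1]
```

Such count (or TF-IDF-weighted) vectors are sparse and high-dimensional, which is the usual motivation for the dense prediction-based embeddings studied in the cited paper.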