2018
DOI: 10.1080/08839514.2018.1451095

Subjective Evaluation: A Comparison of Several Statistical Techniques

Cited by 8 publications (3 citation statements) · References 5 publications
“…For instance, finding similar words based on their frequent collocation. The following algorithms and methods are considered text-corpus semantic similarity algorithms: Hyperspace Analogue to Language (HAL) [33], Latent Semantic Analysis (LSA) [34], Generalized Latent Semantic Analysis (GLSA) [35], Explicit Semantic Analysis (ESA) [36], Pointwise Mutual Information - Information Retrieval (PMI-IR) [37], Second-order co-occurrence pointwise mutual information (SCO-PMI) [38], Normalized Google Distance (NGD) [39] and extracting DIStributionally similar words using CO-occurrences (DISCO) [40]. These algorithms deduce similarity from textual collocations, so a vast and clean textual corpus is needed for them to work accurately and effectively.…”
Section: Related Work
confidence: 99%
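All of the corpus-based measures listed in this statement share one core operation: estimating word association from co-occurrence counts. The following Python sketch shows that idea via pointwise mutual information; the toy corpus and the sentence-level co-occurrence window are assumptions for illustration, and this is not an implementation of any single cited algorithm.

```python
import math
from collections import Counter
from itertools import combinations

# Toy corpus: a stand-in for the "vast and clean textual corpus"
# the quoted passage calls for (assumption: whitespace tokenization).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat and a dog played",
]

word_counts = Counter()
pair_counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    word_counts.update(tokens)
    # Count each unordered word pair once per sentence
    # (sentence-level co-occurrence window).
    for pair in combinations(sorted(set(tokens)), 2):
        pair_counts[pair] += 1

total_words = sum(word_counts.values())
total_pairs = sum(pair_counts.values())

def pmi(w1, w2):
    """Pointwise mutual information: log2( p(w1, w2) / (p(w1) * p(w2)) )."""
    pair = tuple(sorted((w1, w2)))
    if pair_counts[pair] == 0:
        return float("-inf")  # the words never co-occur in this corpus
    p_joint = pair_counts[pair] / total_pairs
    p1 = word_counts[w1] / total_words
    p2 = word_counts[w2] / total_words
    return math.log2(p_joint / (p1 * p2))

print(pmi("cat", "dog"))  # co-occur once: finite, positive association
print(pmi("cat", "rug"))  # never co-occur: -inf
```

On a realistic corpus the same counting step feeds the cited methods in different ways, e.g. PMI-IR estimates the counts from search-engine hits, while LSA factorizes the full word-document count matrix.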
“…Recall and precision were used as evaluation metrics, and the system's overall performance was 50.9% with, and 48.7% without, the additional option. Xia et al. [8] combined the word2vec model with a report corpus to assess similarities among distinct reports. As a result, word2vec improved accuracy by 0.2 compared with the bag-of-words model, and this could be further improved by 0.05-0.10 by training the word2vec model on the reports themselves.…”
Section: Literature Survey
confidence: 99%
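As a rough illustration of the comparison this statement describes, the sketch below contrasts bag-of-words cosine similarity with cosine similarity over averaged word vectors. The tiny vocabulary and random vectors are placeholder assumptions standing in for a word2vec model trained on the report corpus; this is not the setup of Xia et al. [8].

```python
import numpy as np

# Hypothetical stand-in for trained word2vec embeddings; in the cited
# setup these would come from training word2vec on the report corpus.
rng = np.random.default_rng(0)
vocab = ["system", "crashes", "on", "startup",
         "application", "fails", "at", "launch"]
embeddings = {w: rng.normal(size=50) for w in vocab}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def bow_vector(tokens):
    # Bag-of-words: one count per vocabulary slot, no semantics.
    return np.array([tokens.count(w) for w in vocab], dtype=float)

def avg_word_vector(tokens):
    # word2vec-style document vector: mean of the word embeddings.
    return np.mean([embeddings[w] for w in tokens if w in embeddings], axis=0)

doc1 = "system crashes on startup".split()
doc2 = "application fails at launch".split()

# The two reports share no tokens, so bag-of-words similarity is 0.
# With real (trained) embeddings, the averaged vectors would capture
# that both describe the same problem; random vectors only show the
# mechanics of the comparison.
print("BoW cosine:     ", cosine(bow_vector(doc1), bow_vector(doc2)))
print("word2vec cosine:", cosine(avg_word_vector(doc1), avg_word_vector(doc2)))
```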
“…It is based on keyword matching and is considered weak, as it cannot handle synonyms or take context into account. Several works on subjective paper evaluation have used this approach [9] [10].…”
Section: Statistical Technique
confidence: 99%
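A minimal sketch of the keyword-matching idea this statement criticizes (the scoring function and answer strings are hypothetical, not code from [9] or [10]): score a student answer by its verbatim keyword overlap with a model answer, which makes the synonym weakness easy to see.

```python
# Minimal keyword-matching scorer (illustrative only).
def keyword_score(model_answer: str, student_answer: str) -> float:
    """Fraction of model-answer keywords found verbatim in the student answer."""
    keywords = set(model_answer.lower().split())
    answer_words = set(student_answer.lower().split())
    return len(keywords & answer_words) / len(keywords)

model = "photosynthesis converts sunlight into chemical energy"
# Same meaning, different words: pure keyword matching scores it low,
# which is exactly the synonym/context weakness described above.
student = "plants transform light into usable fuel"
print(keyword_score(model, student))  # ~0.17 despite a correct answer
```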