2018
DOI: 10.1007/s00521-018-3477-2

A novel feature extraction methodology for sentiment analysis of product reviews

Cited by 25 publications (8 citation statements)
References 25 publications
“…When comparing the BiGRU RNN network with other neural networks such as Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and CNN+LSTM, our BiGRULA achieved better accuracy, 0.894 on the test dataset, than the other three network models, whose accuracies were 0.881, 0.812, and 0.858, respectively. Table 2 also shows comparative accuracy values for other well-known machine learning approaches reported in the literature [6,24,25] on the IMDB dataset. FPCD feature vectors combined with generalized TF-IDF vectors + Naïve Bayes (G_TF-IDF + FPCD + NB), Word2vec + K-Nearest Neighbor (Word2vec + KNN), and the frequent, pseudo-consecutive phrase feature with high discriminative ability + Support Vector Machine (FPCD + SVM) achieved the highest accuracy among their respective feature extraction methods, yet none of them matched the accuracy of our model.…”
Section: Results and Analysis
confidence: 99%
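The classical baselines named in this excerpt all follow the same pattern: a sparse text representation fed to a conventional classifier. A minimal sketch of one such baseline, TF-IDF features with a Naïve Bayes classifier, is given below; it assumes scikit-learn and a pre-split IMDB-style dataset (train_texts, train_labels, test_texts, test_labels are placeholders), and it is not the exact G_TF-IDF + FPCD pipeline of the cited paper.

```python
# Sketch of a TF-IDF + Naive Bayes sentiment baseline (not the cited G_TF-IDF + FPCD method).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

def tfidf_nb_baseline(train_texts, train_labels, test_texts, test_labels):
    # Unigram + bigram TF-IDF representation of the raw review text
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    X_train = vectorizer.fit_transform(train_texts)
    X_test = vectorizer.transform(test_texts)

    # Multinomial Naive Bayes works directly on the sparse TF-IDF matrix
    clf = MultinomialNB()
    clf.fit(X_train, train_labels)
    preds = clf.predict(X_test)
    return accuracy_score(test_labels, preds)
```

Such a baseline gives a single accuracy number that can be placed alongside the neural-network results in a table like the Table 2 mentioned above.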
“…The n-gram feature extraction used in this study does not consider the semantic similarity or the discriminative ability of words. Therefore, enhanced n-gram representations [29] are recommended to reduce the dimensionality and sparsity of the data. The application of an effective feature selection method may also lead to lower computational complexity and improved time efficiency [30].…”
Section: Discussion
confidence: 99%
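The excerpt above recommends reducing the dimensionality and sparsity of plain n-gram features. One common way to do this, sketched below under the assumption of scikit-learn, is chi-squared feature selection over an n-gram bag-of-words; the specific "enhanced n-gram representations" of [29] are not reproduced here, and the function name and parameters are illustrative.

```python
# Illustrative n-gram feature selection: keep only the k n-grams most associated
# with the class labels (chi-squared test). This is one generic option, not the
# enhanced n-gram method of reference [29].
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

def select_ngram_features(texts, labels, n_range=(1, 3), k=5000):
    # Full (sparse, high-dimensional) n-gram representation
    vectorizer = CountVectorizer(ngram_range=n_range)
    X = vectorizer.fit_transform(texts)

    # Retain the k most label-discriminative n-grams
    selector = SelectKBest(chi2, k=min(k, X.shape[1]))
    X_reduced = selector.fit_transform(X, labels)

    kept = [t for t, keep in zip(vectorizer.get_feature_names_out(),
                                 selector.get_support()) if keep]
    return X_reduced, kept
```

Shrinking the feature space this way directly addresses the computational-complexity and time-efficiency concerns raised in the excerpt.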
“…As with similar approaches that extract features from review data [90], [91], non-domain-dependent phrases are removed, as they do not relate to the needs of the product type being searched for.…”
Section: Removal of Non-discriminate Phrases
confidence: 99%
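The excerpt does not specify how the cited approaches [90], [91] decide which phrases are non-domain-dependent. A purely hypothetical illustration of one simple criterion, keeping a candidate phrase only if it contains at least one term from a domain lexicon for the product type, is sketched below; the lexicon, function name, and example values are all assumptions.

```python
# Hypothetical phrase filter: drop candidate phrases that share no token with a
# domain lexicon. The exact criterion used in [90], [91] is not given in the excerpt.
def filter_domain_phrases(phrases, domain_terms):
    domain_terms = {t.lower() for t in domain_terms}
    kept = []
    for phrase in phrases:
        tokens = set(phrase.lower().split())
        if tokens & domain_terms:          # at least one domain-relevant word
            kept.append(phrase)
    return kept

# Example with an illustrative camera-domain lexicon
camera_terms = {"battery", "lens", "zoom", "autofocus", "sensor"}
candidates = ["great battery life", "arrived on time", "sharp lens", "nice box"]
print(filter_domain_phrases(candidates, camera_terms))
# -> ['great battery life', 'sharp lens']
```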