2015
DOI: 10.1016/j.ipm.2014.09.004
POS-RS: A Random Subspace method for sentiment classification based on part-of-speech analysis

Cited by 73 publications (23 citation statements)
References 52 publications
“…We used terms (unigrams) as input features in our experiments. However, other features like n-grams (Vinodhini and Chandrasekaran, 2017), Part-of-Speech (Wang et al, 2015), Joint Sentiment Topic (He et al, 2011) or improvements in the quality of features (Xia et al, 2016) could also open new possibilities of investigation.…”
Section: Discussion
confidence: 99%
“…After that, majority voting is used as the ensemble strategy. The Random Subspace method [3,16,18,19,25,27,13,33,37] uses a fairly simple randomization approach for feature selection: each base learner in the ensemble is trained on a subset of the original feature space rather than on all features.…”
Section: Ensemble of Classifiers
confidence: 99%
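The random subspace scheme described in the statement above can be sketched in a few lines: draw one random feature subset per base learner, restrict each training example to that subset, and combine the learners' outputs by majority vote. This is a minimal illustration of the general technique, not code from the POS-RS paper; all function names are hypothetical, and a trivial feature vector stands in for real sentiment features.

```python
import random

def random_subspaces(n_features, subspace_size, n_learners, seed=0):
    """Draw one random feature-index subset per base learner (random subspace method)."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subspace_size))
            for _ in range(n_learners)]

def project(x, features):
    """Restrict a feature vector to the chosen subspace."""
    return [x[i] for i in features]

def majority_vote(predictions):
    """Combine base-learner outputs by simple majority voting."""
    return max(set(predictions), key=predictions.count)

# Hypothetical setup: 6 features, each of 5 learners sees only 3 of them.
subspaces = random_subspaces(n_features=6, subspace_size=3, n_learners=5)
x = [0.1, 0.9, 0.3, 0.7, 0.2, 0.5]          # one example's feature vector
views = [project(x, s) for s in subspaces]   # per-learner reduced views
```

Each base learner would then be trained only on its own reduced view, which is what distinguishes the random subspace method from bagging: the randomness is in the feature dimension, not the example dimension.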
“…The ensemble algorithms used in this paper are briefly described. Bagging [1,12,15-19]: Bagging produces new training datasets (bootstrap samples) from the original dataset by sampling with replacement. In other words, multiple versions of the training set are composed by taking bootstrap replicates and employing them as new training sets.…”
Section: Introduction
confidence: 99%
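The bootstrap-replicate construction that the bagging description above relies on can be sketched as follows. This is a generic illustration of sampling with replacement, not code from any of the cited papers; the function names and the toy dataset are hypothetical.

```python
import random

def bootstrap_sample(data, rng):
    """Sample len(data) items with replacement: one bootstrap replicate."""
    return [rng.choice(data) for _ in data]

def bagging_datasets(data, n_learners, seed=0):
    """Build one bootstrap replicate per base learner, as bagging prescribes."""
    rng = random.Random(seed)
    return [bootstrap_sample(data, rng) for _ in range(n_learners)]

# Hypothetical labeled dataset: (feature, sentiment label) pairs.
train = [(0.2, 'neg'), (0.8, 'pos'), (0.5, 'pos'), (0.1, 'neg')]
replicates = bagging_datasets(train, n_learners=3)
```

Each replicate has the same size as the original dataset but typically repeats some examples and omits others, which is what makes the base learners differ from one another.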
“…Then a classifier is built on each of these samples, and their outputs are combined by majority voting. Random Subspace [1,16-18,20-22]: The idea behind this approach is quite simple: the random subspace method trains each base learner on a subset of the original feature space instead of the full feature set.…”
Section: Introduction
confidence: 99%