Proceedings of the Third Joint Conference on Lexical and Computational Semantics (*SEM 2014), 2014
DOI: 10.3115/v1/s14-1003
Improvement of a Naive Bayes Sentiment Classifier Using MRS-Based Features

Abstract: This study explores the potential of using deep semantic features to improve binary sentiment classification of paragraph-length movie reviews from the IMDb website. Using a Naive Bayes classifier as a baseline, we show that features extracted from Minimal Recursion Semantics representations in conjunction with back-off replacement of sentiment terms are effective in obtaining moderate increases in accuracy over the baseline's n-gram features. Although our results are mixed, our most successful feature combinati…

Cited by 9 publications (5 citation statements); references 11 publications.
“…Parsing and generation are thus performed using already existing DELPH-IN tools. DMRS has already been used in other systems for prepositional phrase attachment disambiguation [36], for machine translation [37], for question generation [38], for evaluating multimodal deep learning models [39], and for sentiment analysis [40].…”
Section: Grass Theoretical Foundations
confidence: 99%
“…where m is the number of features, a_i is the value of the i-th feature, L is the set of all class labels, c represents the value that the class variable can take, and f_i is the frequency count of the word a_i in a document d. The prior probability p(c) and the conditional probability p(a_i|c) are generally estimated by (2) and (3), respectively.…”
Section: Related Work, A Multinomial Naïve Bayes Text Classifier
confidence: 99%
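The multinomial Naive Bayes scoring described above — a class prior p(c) combined with word-conditional probabilities p(a_i|c) weighted by in-document frequency counts f_i — can be sketched as follows. This is a minimal illustration with Laplace smoothing, not the cited paper's implementation; the toy corpus and function names are hypothetical.

```python
from collections import Counter, defaultdict
import math

def train(docs):
    """docs: list of (token_list, label) pairs.
    Returns priors p(c), smoothed likelihoods p(w|c), and the vocabulary."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)  # class -> word frequency counts
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)
    priors = {c: class_counts[c] / len(docs) for c in class_counts}
    likelihoods = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        # Laplace (add-one) smoothing over the vocabulary
        likelihoods[c] = {w: (word_counts[c][w] + 1) / (total + len(vocab))
                          for w in vocab}
    return priors, likelihoods, vocab

def classify(words, priors, likelihoods, vocab):
    """argmax_c log p(c) + sum_i f_i * log p(a_i|c); repetition of a
    word in `words` supplies the frequency weight f_i implicitly."""
    scores = {}
    for c in priors:
        s = math.log(priors[c])
        for w in words:
            if w in vocab:  # unseen words are skipped in this sketch
                s += math.log(likelihoods[c][w])
        scores[c] = s
    return max(scores, key=scores.get)
```

Working in log space avoids floating-point underflow when the product over many word probabilities would otherwise vanish.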
“…Automatic text classification is used to automatically assign a textual document to a pre-specified set of classes, which can help people retrieve, query, and utilize information. Current common text classification algorithms include [8]: Naïve Bayes [2], K-nearest neighbors [3], decision trees [4], support vector machine (SVM) [5], and recent deep learning methods such as convolutional neural networks (CNNs) [6], recurrent neural networks (RNNs), and so on [7], [28].…”
Section: Introduction
confidence: 99%
“…Machine learning methods use several learning algorithms to determine the sentiment by training on a known dataset. Many of them rely on very basic classifiers, e.g., Naïve Bayes [24] or Support Vector Machines [31]. They are trained on a particular dataset using features such as bag of words or bigrams, and with or without part-of-speech (PoS) tags.…”
Section: B Sentiment Analysis and Natural Language Processing
confidence: 99%
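The bag-of-words and bigram features mentioned in the excerpt above can be sketched as simple frequency counts over a tokenized review. This is an illustrative sketch only; the function names are hypothetical and PoS tagging is omitted.

```python
from collections import Counter

def bag_of_words(tokens):
    """Unigram frequency features: each word type mapped to its count."""
    return Counter(tokens)

def bigram_features(tokens):
    """Adjacent-pair (bigram) frequency features, which preserve some
    local word order that unigrams discard (e.g. 'not' + 'good')."""
    return Counter(zip(tokens, tokens[1:]))
```

Bigrams capture short-range negation and intensification patterns that a plain bag of words loses, at the cost of a much sparser feature space.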