Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016
DOI: 10.18653/v1/s16-1174

IIT-TUDA at SemEval-2016 Task 5: Beyond Sentiment Lexicon: Combining Domain Dependency and Distributional Semantics Features for Aspect Based Sentiment Analysis

Abstract: This paper reports the IIT-TUDA participation in the SemEval 2016 shared Task 5 of Aspect Based Sentiment Analysis (ABSA) for subtask 1. We describe our system incorporating domain dependency graph features, distributional thesaurus and unsupervised lexical induction using an unlabeled external corpus for aspect based sentiment analysis. Overall, we submitted 29 runs, covering 7 languages and 4 different domains. Our system is placed first in sentiment polarity classification for the English laptop domain, Spa…
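As a rough, hedged illustration of what dependency-graph features for an aspect term can look like, the sketch below collects the head and children of the term in a spaCy parse. The function name extract_dep_features, the en_core_web_sm model, and the feature string format are assumptions for illustration only, not the paper's actual feature set.

import spacy

# Assumes the small English spaCy model is installed:
#   pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_dep_features(sentence, aspect):
    """Collect dependency neighbours of the aspect term as string features."""
    doc = nlp(sentence)
    feats = []
    for tok in doc:
        if tok.text.lower() == aspect.lower():
            # word governing the aspect term, plus the relation label
            feats.append("head::%s::%s" % (tok.head.lemma_, tok.dep_))
            # words directly attached to the aspect term
            feats.extend("child::%s::%s" % (c.lemma_, c.dep_) for c in tok.children)
    return feats

print(extract_dep_features("The battery life is surprisingly good.", "battery"))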

Cited by 42 publications (26 citation statements). References 16 publications.
“…Sentiment lexicons are a popular way to inject additional information into models for sentiment analysis. We experimented with using sentiment lexicons by Kumar et al. (2016) but were not able to significantly improve upon our results with pre-trained embeddings. In light of the diversity of domains in the context of aspect-based sentiment analysis and many other applications, domain-specific lexicons (Hamilton et al., 2016) are often preferred.…”
Section: Leveraging Additional Information (mentioning, confidence: 90%)
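A minimal sketch, assuming a plain word-list lexicon, of how lexicon counts can be injected as extra features alongside embeddings; the toy entries and feature names below are illustrative and are not the Kumar et al. (2016) lexicons.

# Toy lexicon entries; real lexicons are far larger and often domain-specific.
pos_words = {"good", "great", "excellent"}
neg_words = {"bad", "poor", "terrible"}

def lexicon_features(tokens, pos_words, neg_words):
    """Count lexicon hits and return them as features a classifier can consume."""
    pos = sum(t.lower() in pos_words for t in tokens)
    neg = sum(t.lower() in neg_words for t in tokens)
    return {"lex_pos_count": pos, "lex_neg_count": neg, "lex_net_score": pos - neg}

print(lexicon_features("The screen is great but the keyboard is poor".split(),
                       pos_words, neg_words))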
“…While using pre-trained word embeddings (https://s3.amazonaws.com/aylien-main/data/multilingual-embeddings/index.html) is an effective way to mitigate this deficit, for high-resource languages, solely leveraging unsupervised language information is not enough to perform on par with approaches that make use of large external resources (Kumar et al., 2016) and meticulously hand-crafted features (Brun et al., 2016). Sentiment lexicons are a popular way to inject additional information into models for sentiment analysis.…”
Section: Leveraging Additional Information (mentioning, confidence: 99%)
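To make the "unsupervised language information" concrete, the sketch below averages pre-trained word vectors into a fixed-size sentence representation. The text file format (token followed by space-separated floats) and the 300-dimensional size are assumptions; the linked multilingual embeddings may use a different layout.

import numpy as np

def load_vectors(path, dim=300):
    """Read word vectors from a plain-text file with one 'token v1 ... v_dim' line each."""
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) == dim + 1:
                vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vecs

def sentence_vector(tokens, vecs, dim=300):
    """Average the vectors of known tokens; fall back to zeros if none are known."""
    hits = [vecs[t.lower()] for t in tokens if t.lower() in vecs]
    return np.mean(hits, axis=0) if hits else np.zeros(dim, dtype=np.float32)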
“…In order to keep our model simple and our results clear, we restrict our input representation to a sequence of word embeddings. While additional features such as Part-of-Speech (POS) tags are known to perform well in the domain of OTE extraction (Toh and Su, 2016; Kumar et al., 2016; Jebbara and Cimiano, 2016), they would require a separately trained model for POS-tag prediction, which cannot be assumed to be available for every language. We refrain from using more complex architectures such as memory networks, as our goal is mainly to investigate the possibility of performing zero-shot cross-lingual transfer learning for OTE prediction.…”
Section: Approach (mentioning, confidence: 99%)
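The restricted input representation the quote describes can be sketched as follows: each token is mapped to its pre-trained vector and nothing else (no POS tags or other hand-crafted features), and the resulting matrix is handed to a sequence tagger that predicts B/I/O opinion-target tags. The unknown-word fallback and the 300-dimensional size are illustrative assumptions; the tagger itself (e.g. a recurrent network) is omitted.

import numpy as np

def embed_sequence(tokens, vecs, dim=300):
    """One row per token; unknown tokens fall back to a zero vector."""
    return np.stack([vecs.get(t.lower(), np.zeros(dim, dtype=np.float32))
                     for t in tokens])

X = embed_sequence("The battery life is great".split(), {}, dim=300)
print(X.shape)  # (5, 300): the only input the sequence tagger sees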
“…Most of the systems dedicated to ABSA use machine learning algorithms such as SVMs (Wagner et al., 2014; Kiritchenko et al., 2014) or CRFs (Toh and Wang, 2014; Hamdan et al., 2015), which are often combined with semantic lexical information, n-gram models, and sometimes more fine-grained syntactic or semantic information. For example, Kumar et al. (2016) proposed a very efficient system on the different languages of SemEval-2016. The system uses information extracted from dependency graphs and a distributional thesaurus learned on the different domains and languages of the challenge.…”
Section: Related Work (mentioning, confidence: 99%)
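As a hedged sketch of the SVM-plus-n-gram recipe mentioned above, the scikit-learn pipeline below trains a linear SVM on TF-IDF unigram and bigram features. The toy data and hyper-parameters are purely illustrative and do not reproduce any of the cited systems, which add lexical, syntactic, and lexicon features on top of such a baseline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy polarity data standing in for SemEval-style review sentences.
texts = ["the battery is great", "the screen is terrible",
         "great keyboard", "terrible service"]
labels = ["positive", "negative", "positive", "negative"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["the battery is terrible"]))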