2015
DOI: 10.1016/j.ipm.2015.05.005
Sentiment analysis meets social media – Challenges and solutions of the field in view of the current information sharing context

Cited by 25 publications (10 citation statements)
References 3 publications
“…As the technology is further refined, standardization of methodology and the establishment of healthcare specific SA methods (either ML algorithms or a medical‐sentiment lexicon) may facilitate the development of further validity regarding the application of this technology to the health care sector 41,42 …”
Section: Discussion (mentioning)
confidence: 99%
“…Instead, the "triggers" of those emotions are written. The challenge is identifying terms that act as the trigger and associating them with an emotional label [13].…”
Section: Processing Natural Language (mentioning)
confidence: 99%
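The excerpt above concerns detecting emotions through the terms that trigger them rather than through explicit emotion words. As a rough illustration only (this is not code from the cited work, and the lexicon entries are invented), a lookup over hypothetical trigger terms could look like this:

```python
# Minimal sketch (not from the cited paper): a lexicon lookup that tags
# hypothetical "trigger" terms with emotion labels.
TRIGGER_LEXICON = {
    "funeral": "sadness",
    "promotion": "joy",
    "deadline": "stress",
    "spider": "fear",
}

def label_emotion_triggers(tokens):
    """Return (token, emotion) pairs for tokens that act as emotion triggers."""
    return [(t, TRIGGER_LEXICON[t.lower()])
            for t in tokens
            if t.lower() in TRIGGER_LEXICON]

print(label_emotion_triggers("I missed the deadline and the funeral".split()))
# -> [('deadline', 'stress'), ('funeral', 'sadness')]
```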
“…There are also massive negative implications to the character limit imposed by Twitter. The "brevity" of tweets requires users to include non-standard abbreviations, typos, irony, and trending topics called hashtags [13]. Such unconventional and unstructured texts are considered to be 'noise' as natural language processing (NLP) software does not handle such information so well, creating problems for Twitter content analysis [15].…”
Section: Processing Natural Language (mentioning)
confidence: 99%
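The excerpt above points to the noise in tweets (non-standard abbreviations, typos, hashtags) that NLP software handles poorly. A minimal normalization sketch, assuming an illustrative abbreviation table and standard regular expressions rather than any pipeline from the cited works:

```python
# Minimal sketch (assumption, not the cited pipeline): light cleanup of noisy
# tweet text before it reaches an NLP/sentiment component.
import re

ABBREVIATIONS = {"gr8": "great", "u": "you", "thx": "thanks"}  # illustrative only

def normalize_tweet(text):
    text = re.sub(r"http\S+", "", text)    # drop URLs
    text = re.sub(r"@\w+", "", text)       # drop user mentions
    text = re.sub(r"#(\w+)", r"\1", text)  # keep the hashtag word, drop '#'
    tokens = [ABBREVIATIONS.get(t.lower(), t) for t in text.split()]
    return " ".join(tokens)

print(normalize_tweet("gr8 news from @acme! #sentimentanalysis http://t.co/xyz"))
# -> 'great news from ! sentimentanalysis'
```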
“…Some studies, such as the work of Aidan, Kushmerick, and Smyth (2002), classify features of opinionated documents into two categories: those that depend on the query and incorporate relevance and opinion into the learning phase (Saif et al, 2014;Seki, Kino, Sato, & Uehara, 2007), and those that use characteristics independent of the topic and do not incorporate relevance into the learning phase. Furthermore, while some studies use a single classifier like support vector machine (SVM), naive bayes or logistic regression to return opinionated documents, others use multiple different classifiers to compare their impacts on opinion detection (Balahur, 2016;Balahur & Jacquet, 2015;Bauman, Liu, & Tuzhilin, 2016;Fu, Abbasi, Zeng, & Chen, 2012;Lu, Mamoulis, Pitoura, & Tsaparas, 2016;Mullen & Collier, 2004;Pang & Lee, 2004;Riloff & Wiebe, 2003;Seki et al, 2007;Tu, Cheung, Mamoulis, Yang, & Lu, 2016). Finally, some pre-existing approaches use internal collections built directly from the collection to be analyzed for collections training, while others use external collections built from independent collections of the analyzed collection (Aidan et al, 2002;Baccianella, Esuli, & Sebastiani, 2010;Bifet & Frank, 2010;Pak & Paroubek, 2010;Seki et al, 2007).…”
Section: Related Work (mentioning)
confidence: 99%
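The excerpt above contrasts approaches that use a single classifier (SVM, naive Bayes, or logistic regression) to return opinionated documents with approaches that compare several classifiers. A minimal sketch of such a comparison using scikit-learn, on toy made-up data rather than the collections used in any cited study:

```python
# Minimal sketch (assumption): training several classifiers on the same tiny
# toy opinion-detection task and comparing their predictions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["I love this phone", "Terrible battery life", "Works as described",
        "Absolutely fantastic camera", "Worst purchase ever", "It turns on"]
labels = [1, 1, 0, 1, 1, 0]  # 1 = opinionated, 0 = factual/neutral

classifiers = {
    "svm": LinearSVC(),
    "naive_bayes": MultinomialNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

for name, clf in classifiers.items():
    model = make_pipeline(TfidfVectorizer(), clf)  # bag-of-words features + classifier
    model.fit(docs, labels)
    print(name, model.predict(["The screen is gorgeous", "It has 64 GB of storage"]))
```

In practice such comparisons would use held-out evaluation data and far larger training collections; the point here is only the structure of trying multiple classifiers over the same feature representation.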