2017
DOI: 10.3233/ida-160048

Unsupervised event exploration from social text streams

Abstract: Social media provides unprecedented opportunities for people to disseminate information and share their opinions and views online. Extracting events from social media platforms such as Twitter could help in understanding what is being discussed. However, event extraction from social text streams poses huge challenges due to the noisy nature of social media posts and dynamic evolution of language. We propose a generic unsupervised framework for exploring events on Twitter which consists of four major …

Cited by 10 publications (7 citation statements). References 19 publications.
“…In event modeling, researchers could not represent events formally without a structured definition of events, which some found in the ‘four Ws’, or Who did What, Where and When (Chen and Li, 2020). To expect event tracking algorithms to model events might appear unrealistic, but we could express the ground truth in terms of the ‘four Ws’. Similarly to Zhou et al. (2015, 2017), we could expect a document-pivot technique's clusters to describe Who did What, Where and When, or a feature-pivot technique's keywords to capture them.…”
Section: Discussion
confidence: 98%
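To make the ‘four Ws’ ground truth described in the excerpt concrete, here is a minimal Python sketch of such an event record; the class and field names are illustrative assumptions, not a schema taken from the cited papers.

```python
from dataclasses import dataclass
from typing import List

# A minimal sketch of a "four Ws" ground-truth event record, as the excerpt
# describes: Who did What, Where and When. Class and field names are
# illustrative assumptions, not a schema from the cited papers.
@dataclass
class FourWsEvent:
    who: List[str]    # entities acting in the event (people, organizations)
    what: List[str]   # keywords describing the action
    where: List[str]  # locations mentioned
    when: str         # date or time expression

# A document-pivot cluster or feature-pivot keyword set could be scored
# against records like this one (values are made up for illustration).
ground_truth = FourWsEvent(
    who=["NASA"],
    what=["launch", "satellite"],
    where=["Cape Canaveral"],
    when="2017-08-01",
)
```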
“…Chakrabarti and Punera (2011) devised five labels to describe the state of American football games, plays within those games, and other general comments. Zhou et al. (2015, 2017) presented slightly more rigid rules based on Who does What, Where and When, but such examples are scarce. Most researchers leave the labeling process to the discretion of the annotators, and in discretion, subjectivity prevails.…”
Section: Human Error and Bias
confidence: 99%
“…• LEM [5] is a Bayesian modeling approach for open-domain event extraction. It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements (organization, location, person, keyword). We implement the algorithm with the default configuration.…”
Section: Methods
confidence: 99%
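As a rough illustration of the modeling idea the excerpt attributes to LEM [5], the sketch below treats the event as a latent variable and scores a tweet's extracted elements under event-specific multinomials, so the joint probability factorizes over the element types (organization, location, person, keyword). The vocabulary sizes, priors, and example tweet are assumptions for illustration, not LEM's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events = 5                         # assumed number of latent events
vocab_sizes = {"org": 100, "loc": 50, "person": 80, "keyword": 500}

# p(e): prior over latent events; p(element | e): one multinomial per
# element type and event, drawn here from symmetric Dirichlet priors.
theta = rng.dirichlet(np.ones(n_events))
phi = {t: rng.dirichlet(np.ones(v), size=n_events) for t, v in vocab_sizes.items()}

def joint_log_prob(e, elements):
    """log p(e, elements) = log p(e) + sum of log p(token | e) over all tokens."""
    lp = np.log(theta[e])
    for t, tokens in elements.items():
        lp += np.log(phi[t][e][tokens]).sum()
    return lp

# Score one tweet's extracted elements (ids into each vocabulary) under
# every latent event; the values below are made up for illustration.
tweet = {"org": [3], "loc": [7], "person": [2, 11], "keyword": [42, 99]}
scores = [joint_log_prob(e, tweet) for e in range(n_events)]
best_event = int(np.argmax(scores))
```

In the cited model the latent event would be inferred with Bayesian inference rather than a hard argmax; the sketch only shows how the joint distribution over an event and its elements factorizes.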
“…Topic models [1,2] underpin many successful applications within the field of Natural Language Processing (NLP). Variants of topic models have been proposed for different tasks including content analysis of e-petitions [3], topic-associated sentiment analysis [4], event extraction from social media [5,6,7] and product aspect mining [8]. However, topic models typically rely on mean-field variational inference [9] or collapsed Gibbs sampling for model learning.…”
Section: Introduction
confidence: 99%
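Since the excerpt notes that topic models typically rely on mean-field variational inference or collapsed Gibbs sampling for model learning, the sketch below shows a collapsed Gibbs sampler for standard LDA; the toy corpus, topic count, and hyperparameters are assumptions chosen only to keep the example self-contained.

```python
import numpy as np

# A minimal collapsed Gibbs sampler for LDA. Token-topic assignments z are
# resampled from the collapsed conditional
#   p(z = k | rest) ∝ (n_dk + alpha) * (n_kw + beta) / (n_k + V * beta),
# with the topic-word and doc-topic multinomials integrated out.
rng = np.random.default_rng(0)
docs = [[0, 1, 2, 1], [2, 3, 3, 4], [0, 4, 4, 1]]   # toy corpus of word ids
V, K, alpha, beta = 5, 2, 0.1, 0.01                 # assumed sizes and priors

ndk = np.zeros((len(docs), K))   # doc-topic counts
nkw = np.zeros((K, V))           # topic-word counts
nk = np.zeros(K)                 # topic totals
z = []                           # current assignment of every token
for d, doc in enumerate(docs):
    zd = []
    for w in doc:
        k = int(rng.integers(K))                    # random initialization
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
        zd.append(k)
    z.append(zd)

for _ in range(200):                                # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]                             # remove current assignment
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = int(rng.choice(K, p=p / p.sum()))   # resample and restore
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
            z[d][i] = k
```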
“…The results show that detection, extraction, and classification of emotions, feelings, and opinions are the main applications coded as private states analysis. This category records 33 techniques, algorithms, or methods for performing it [125]:

(TSSE) Tweet Sentiment Score Estimator [104]; (BM) Naive Bayes, Bayesian Logistic [126]; [127]; (LSA) Latent Semantic Analysis [128]; (LIWC) Linguistic Inquiry and Word Count [129]; [130]; [131]; (SANT) Sociological Approach to handling Noisy and short Texts [132]
(SC) Sarcasm: (TPR) True Positive Ratio [92]; (SVM) Support Vector Machine [92]; (LRS) Linguistic Rules Sarcasm [133]; [124]
(TC) Text Classification: (SVM) Support Vector Machine [134]; [135]; (ENS) Ensemble Classifiers [135]; [136]; (LECM) Latent Event Category Model [137]; (BM) Naive Bayes, Bayesian Logistic [137]; [138]; (RF) Random Forest [139]; (LR) Logistic Regression [140]
(SE) Search: (FL) Fuzzy Logic [141]; (TF-IDF) Term Frequency [142]; [143]
(KB) Knowledge Base: (ON) Ontologies [144]; [145]; [95]; [146]; [147]
(SI) Social Influence: (PN) Proximity Networks [102]; (PR) Pagerank [113]; (ST) Statistical techniques [100]; (BM) Naive Bayes, Bayesian Logistic [111]
(DF) Diffusion: (RM) Rumor Model [117]; (BM) Naive Bayes, Bayesian Logistic [127]; (ST) Statistical techniques [122]; (VAM) Vector Autoregressive Model…”
Section: B. Subjectivity Analysis
confidence: 99%