Proceedings of the 11th International Conference on Advances in Information Technology 2020
DOI: 10.1145/3406601.3406624

Using LSTM for Context Based Approach of Sarcasm Detection in Twitter

Cited by 15 publications (10 citation statements); references 10 publications.
“…Khotijah et al [40] (2020) applied Paragraph2vec to find the context in the tweets and then classified whether the tweets were sarcastic or not using an LSTM. Their method worked well for Indonesian tweets, but was found deficient owing to the sparse dataset in the case of English tweets.…”
Section: Figure 3, Data Collection
confidence: 99%
“…The selection of n_i and n_k is not a trivial task, since these two values determine the size and number of data samples available for the model to be trained with (refer to section Data Reconditioning: Augmentation through Windowing), which naturally has a direct effect on model performance and uncertainty. Considering the differing lengths of the time-series data sets for each simulation case, and acknowledging that they have been truncated for each mixing system (τ_f = 385 and τ_f = 98 for the stirred and static mixer, respectively), the values of n_i and n_k were fixed to (50, 50) and (40, 30) for the stirred and static mixer, respectively. These values were subjected to an early sensitivity test, but a full-scale tuning process would be required to discover the optimal configuration for each mixing case study.…”
Section: Methodology
confidence: 99%
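The windowing-based augmentation described in the excerpt above can be illustrated with a short sketch. This is a minimal stand-alone helper under assumed semantics (each sample takes n_i consecutive points as input and the next n_k points as the target), not the cited authors' implementation; `window_series` and the toy series are illustrative:

```python
def window_series(series, n_i, n_k):
    """Split one time series into (input, target) pairs via a sliding window.

    Each sample uses n_i consecutive points as input and the following n_k
    points as the prediction target, so a series of length T yields
    T - n_i - n_k + 1 training samples.
    """
    samples = []
    for start in range(len(series) - n_i - n_k + 1):
        x = series[start:start + n_i]
        y = series[start + n_i:start + n_i + n_k]
        samples.append((x, y))
    return samples

# Toy stand-in for the truncated static-mixer trace (τ_f = 98, windows (40, 30)).
series = list(range(98))
pairs = window_series(series, n_i=40, n_k=30)
print(len(pairs))  # 29 samples: 98 - 40 - 30 + 1
```

Larger windows capture more temporal context per sample but leave fewer samples to train on, which is why the excerpt notes that choosing (n_i, n_k) trades off data-set size against window length.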
“…Previous works have implemented techniques to circumvent this flaw, such as padding, truncation, or “attention” mechanisms. The former two are common approaches in text recognition and image processing, where the length of the longest (padding) or shortest (truncation) sequence is set as the standard, and each sequence is either filled with zeros or has data removed accordingly. Despite the benefits of longer padded sequences (Khotijah et al demonstrated consistently higher model accuracy when handling longer sequences) or the ease of dealing with truncated data sets, other studies have suggested alternatives (e.g., nearest-neighbor interpolation), arguing that padding can be computationally demanding and naive truncation methods can lead to critical information loss.…”
Section: Methods
confidence: 99%
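The padding and truncation strategies contrasted in the excerpt above can be sketched in a few lines. This is an illustrative helper (`pad_or_truncate` is an assumed name, not an API from the cited work): padding extends every sequence with zeros to the longest length, while truncation cuts every sequence to the shortest length.

```python
def pad_or_truncate(sequences, target_len=None, mode="pad", pad_value=0):
    """Normalize variable-length sequences to a common length.

    mode="pad":      pad each sequence with pad_value up to target_len
                     (default: the longest sequence's length).
    mode="truncate": cut each sequence down to target_len
                     (default: the shortest sequence's length).
    """
    if mode == "pad":
        target = target_len or max(len(s) for s in sequences)
        return [list(s) + [pad_value] * (target - len(s)) for s in sequences]
    if mode == "truncate":
        target = target_len or min(len(s) for s in sequences)
        return [list(s)[:target] for s in sequences]
    raise ValueError(f"unknown mode: {mode}")

seqs = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
print(pad_or_truncate(seqs, mode="pad"))       # zero-pad to length 4
print(pad_or_truncate(seqs, mode="truncate"))  # cut to length 2
```

The trade-off the excerpt describes is visible even here: padding preserves every observation at the cost of larger (mostly-zero) inputs, while truncation discards the tail of longer sequences, which is the "critical information loss" the cited studies warn about.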
“…Karthik Sundararajan and others proposed an approach for detecting sarcasm and irony type using a Multi-Rule Based Ensemble Feature Selection Model [17]. Siti Khotijah, Khadijah Tirtawangsa, and Arie A. Suryani suggested a context-based method to detect sarcasm in tweets based on Long Short-Term Memory (LSTM) [18]. According to another paper, a classifier's performance is vital for sarcasm prediction in opinion mining [19].…”
Section: Literature Review
confidence: 99%