Preprint | 2021 | DOI: 10.20944/preprints202101.0302.v1
Interpretable Multi-Head Self-Attention Architecture for Sarcasm Detection in Social Media

Abstract: Sarcasm is a linguistic expression often used to communicate the opposite of what is said, usually something very unpleasant, with an intention to insult or ridicule. The inherent ambiguity of sarcastic expressions makes sarcasm detection very difficult. In this work, we focus on detecting sarcasm in textual conversations from various social networking platforms and online media. To this end, we develop an interpretable deep learning model using multi-head self-attention and gated recurrent units. Multi-head…

Cited by 16 publications (7 citation statements) | References 28 publications
“…The attention mechanism finds patterns in the input that are critical for solving the task at hand. In deep learning, self-attention [18] is an attention method for sequences that helps learn task-specific relationships among the distinct components of a given sequence in order to produce a good sequence representation. In the self-attention model, three linear projections of the input sequence are produced: Key (K), Value (V), and Query (Q), in which Q;…”
Section: MHSA-GRU Based Sarcasm Classification (mentioning)
confidence: 99%
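To make the Query/Key/Value projections described in the statement above concrete, here is a minimal multi-head self-attention sketch in PyTorch. The dimensions (embed_dim=128, num_heads=8) and the use of nn.MultiheadAttention are illustrative assumptions, not the cited paper's exact configuration.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; not taken from the paper.
embed_dim, num_heads, seq_len = 128, 8, 20

# nn.MultiheadAttention builds the three linear projections
# Query (Q), Key (K), and Value (V) internally.
attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

x = torch.randn(1, seq_len, embed_dim)  # one sequence of token embeddings
# Self-attention: the same input serves as query, key, and value.
out, weights = attn(x, x, x)

print(out.shape)      # torch.Size([1, 20, 128]): contextualized representation
print(weights.shape)  # torch.Size([1, 20, 20]): attention over positions
```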
“…A better solution has a lower error rate, and the worst solution has the highest error rate. Finally, a detailed comparison study of the ASDC-HPTDL model with existing techniques is given in Table 2 (Akula & Garibay, 2021).…”
Section: IAFO Based Hyperparameter Tuning (mentioning)
confidence: 99%
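The fitness criterion in the statement above (a better solution has a lower error rate) can be illustrated with a short, hedged sketch: candidate hyperparameter settings are scored by validation error and the lowest-error one is kept. The candidate grid and the evaluate_config helper are hypothetical placeholders, not the IAFO algorithm itself.

```python
import random

# Hypothetical search space; names and values are illustrative only.
candidates = [
    {"lr": 1e-3, "hidden": 64},
    {"lr": 5e-4, "hidden": 128},
    {"lr": 1e-4, "hidden": 256},
]

def evaluate_config(cfg):
    """Placeholder: train a model with cfg and return its validation
    error rate. Simulated here with a random value in [0, 1)."""
    return random.random()

# Fitness = error rate: the best solution has the lowest error rate,
# the worst has the highest.
scored = [(evaluate_config(cfg), cfg) for cfg in candidates]
best_error, best_cfg = min(scored, key=lambda t: t[0])
worst_error, worst_cfg = max(scored, key=lambda t: t[0])
print(f"best:  {best_cfg} error={best_error:.3f}")
print(f"worst: {worst_cfg} error={worst_error:.3f}")
```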
“…In particular, they utilize a bi-directional gated RNN to capture syntactic and semantic information from tweets, and a pooling neural network to extract contextual features from past tweets. Akula and Garibay [17] focus on detecting sarcasm in textual conversations from various social networking and online media platforms. To this end, they developed an interpretable deep learning method with gated recurrent units and multi-head self-attention.…”
Section: Literature Review (mentioning)
confidence: 99%
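A minimal sketch of the combination this review describes, gated recurrent units plus multi-head self-attention, is given below: a GRU encodes the token sequence, self-attention weighs the GRU states, and a linear layer classifies. All layer sizes and the exact ordering of components are assumptions for illustration, not the architecture from [17].

```python
import torch
import torch.nn as nn

class MHSAGRUSketch(nn.Module):
    """Illustrative sarcasm classifier: bi-directional GRU encoder followed
    by multi-head self-attention. Hyperparameters are assumptions, not the
    configuration from the cited work."""

    def __init__(self, vocab_size=10000, embed_dim=128, hidden=128, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.clf = nn.Linear(2 * hidden, 2)  # sarcastic vs. not sarcastic

    def forward(self, tokens):               # tokens: (batch, seq_len)
        h, _ = self.gru(self.embed(tokens))  # (batch, seq_len, 2 * hidden)
        a, _ = self.attn(h, h, h)            # self-attention over GRU states
        return self.clf(a.mean(dim=1))       # pool over time, then classify

logits = MHSAGRUSketch()(torch.randint(0, 10000, (2, 20)))
print(logits.shape)  # torch.Size([2, 2])
```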