Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021) 2021
DOI: 10.18653/v1/2021.semeval-1.64
SarcasmDet at SemEval-2021 Task 7: Detect Humor and Offensive based on Demographic Factors using RoBERTa Pre-trained Model

Abstract: This paper presents one of the top-ranked systems for Task 7 at SemEval-2021, “HaHackathon: Detecting and Rating Humor and Offense”. The shared task consists of two parts: Task 1, with three subtasks (1a, 1b, and 1c), and Task 2. The goal of Task 1 is to predict whether a text would be considered humorous and, if so, how humorous it is and whether its humor rating would be perceived as controversial. The goal of Task 2 is to predict how offensive the text is considered to be for users in…
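As a rough illustration of the kind of system the abstract describes (the page does not reproduce the authors' code), the sketch below wires a RoBERTa encoder to a binary classification head for subtask 1a using the Hugging Face transformers library. The checkpoint name, label scheme, and example text are assumptions, not details from the paper.

```python
# Hypothetical sketch of a RoBERTa-based humor classifier for subtask 1a.
# Assumes the `transformers` and `torch` packages are installed; the
# checkpoint name and label scheme are illustrative, not from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=2,  # 0 = not humorous, 1 = humorous (assumed label scheme)
)
model.eval()

text = "I told my computer a joke, but it didn't get it."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2)

# NOTE: the classification head is freshly initialized here, so this
# prediction is meaningless until the model is fine-tuned on the task data.
print("humorous" if logits.argmax(dim=-1).item() == 1 else "not humorous")
```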

Cited by 14 publications (15 citation statements: 0 supporting, 15 mentioning, 0 contrasting).
References 20 publications.
“…New PLMs were introduced that outperform existing models, optimizing them in various directions such as lite versions that reduce parameters, increase model speed, and reduce memory consumption (e.g., ALBERT [92] for BERT). Other alternatives focused on training strategies, such as XLNet [93], which introduced an autoregressive pre-training method [94]. This notable advancement in performance using transformers comes with many limitations, including the negative impact of class imbalance [95] and the intensive computation required by large PLM models [96].…”
Section: Classification Models (mentioning)
confidence: 99%
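To make the “lite PLM” point in the statement above concrete, the short sketch below compares the parameter counts of BERT and its lite variant ALBERT. The checkpoint IDs are standard Hugging Face hub names chosen for illustration, not taken from references [92]–[96].

```python
# Illustrative comparison of BERT vs. its "lite" variant ALBERT.
# Checkpoint IDs are standard Hugging Face hub names (an assumption here).
from transformers import AutoModel

for name in ("bert-base-uncased", "albert-base-v2"):
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")

# Expected ballpark: BERT-base ~110M vs. ALBERT-base ~12M, thanks to
# ALBERT's cross-layer parameter sharing and factorized embeddings.
```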
“…The following year, Faraj and Abdullah [19] published the best solution for the shared task on sentiment and sarcasm detection in the Arabic language. The overall objective of the task is to identify whether a tweet is sarcastic or not.…”
Section: Related Work (mentioning)
confidence: 99%
“…Several researchers have been drawn to detecting hate, harm, sarcasm, and satire on social media (Faraj and Abdullah, 2021; Isaksen and Gambäck, 2020; Watanabe et al., 2018). Internet memes are considered one of the most popular ways to communicate on all topics on social media.…”
Section: Related Work (mentioning)
confidence: 99%