Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), 2021
DOI: 10.18653/v1/2021.semeval-1.161
YoungSheldon at SemEval-2021 Task 7: Fine-tuning Is All You Need

Abstract: In this paper, we describe our system used for SemEval 2021 Task 7: HaHackathon: Detecting and Rating Humor and Offense. We used a simple fine-tuning approach using different Pre-trained Language Models (PLMs) to evaluate their performance for humor and offense detection. For regression tasks, we averaged the scores of different models, leading to better performance than the original models. We participated in all SubTasks. Our best performing system was ranked 4th in SubTask 1-b, 8th in SubTask 1-c, and 12th in SubTask …
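The abstract mentions averaging the regression scores of several fine-tuned PLMs. The following is a minimal sketch of that ensembling step, assuming each model has already been fine-tuned with a single-output regression head; the checkpoint names and the predict_score helper are illustrative assumptions, not taken from the paper.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative checkpoints; the actual fine-tuned models are not specified here.
CHECKPOINTS = ["bert-base-uncased", "roberta-base"]

def predict_score(checkpoint: str, text: str) -> float:
    """Return the scalar regression score (num_labels=1) from one fine-tuned PLM."""
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=1)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, 1)
    return logits.item()

def ensemble_score(text: str) -> float:
    """Average the scores of the individual models, as described in the abstract."""
    scores = [predict_score(ckpt, text) for ckpt in CHECKPOINTS]
    return sum(scores) / len(scores)

print(ensemble_score("An example joke to rate for humor."))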

Cited by 3 publications (3 citation statements)
References: 17 publications
“…Finetuning (Qiu et al., 2020) pre-trained language models has become a popular approach in the deep learning community (Sharma et al., 2021b). It is a form of transfer learning that utilizes models trained on enormous amounts of unannotated text data to learn general-purpose representations.…”
Section: Introduction (mentioning)
Confidence: 99%
“…This information is used in downstream tasks by simply finetuning on task-specific datasets. PLMs have shown remarkable performance on such downstream tasks using the simple finetuning approach (Sharma et al., 2021b). We extend the same idea to identify sarcasm (Subtask A) and identify the type of irony (Subtask B).…”
Section: Introduction (mentioning)
Confidence: 99%
“…These models can be finetuned on various downstream tasks using task-specific datasets. Finetuning allows models to adapt to small task-specific datasets easily and shows promising results (Sharma et al., 2021b). Next, we provide a summary of PLMs used in our approach.…”
Section: Introduction (mentioning)
Confidence: 99%
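The citing passages above describe the same simple recipe of fine-tuning a PLM on a small task-specific dataset. Below is a minimal sketch of that recipe using the Hugging Face Trainer; the checkpoint, the IMDB stand-in dataset, and the hyperparameters are illustrative assumptions, not details reported in the paper.

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Illustrative setup: any small task-specific dataset with "text"/"label" columns works.
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")  # stand-in for a task-specific dataset

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="finetuned-plm",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
                  eval_dataset=dataset["test"].select(range(500)))
trainer.train()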