Proceedings of the 13th International Workshop on Semantic Evaluation 2019
DOI: 10.18653/v1/s19-2109

Fermi at SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media using Sentence Embeddings

Abstract: This paper describes our system (Fermi) for Task 6: OffensEval: Identifying and Categorizing Offensive Language in Social Media of SemEval-2019. We participated in all three sub-tasks within Task 6. We evaluate multiple sentence embeddings in conjunction with various supervised machine learning algorithms and assess the performance of these simple yet effective embedding-ML combinations. Our team (Fermi)'s model achieved an F1-score of 64.40%, 62.00% and 62.60% for sub-tasks A, B and C respectively on…
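As a rough illustration of the embedding-ML combination described in the abstract, the sketch below encodes tweets with a pretrained sentence encoder and trains a classical classifier on top. This is a minimal sketch under stated assumptions: the sentence-transformers encoder, scikit-learn's SVC, and the placeholder tweets/labels are illustrative choices, not the specific encoders or classifiers used by the Fermi system (which the truncated abstract does not list).

```python
# Minimal sketch of an embedding + classical ML pipeline (assumed components, not Fermi's exact setup).
from sentence_transformers import SentenceTransformer
from sklearn.svm import SVC

# Placeholder tweets and sub-task A style labels (OFF = offensive, NOT = not offensive).
texts = ["you are awful", "have a nice day", "shut up idiot", "great work everyone"]
labels = ["OFF", "NOT", "OFF", "NOT"]

# Encode each tweet into a fixed-size sentence embedding (assumed encoder).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(texts)

# Train a supervised classifier on top of the frozen sentence embeddings.
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(encoder.encode(["what a terrible person"])))
```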

Cited by 8 publications (3 citation statements). References 22 publications.
“…Pelicon et al. achieved a 0.8078 F-score with BERT [30]. In [31], a 64.40% F-score was obtained with machine learning algorithms fed multiple sentence embeddings. Results showed that the deep model dominated the SVM models with a 0.7793 F-score.…”
Section: Related Work
confidence: 99%
“…Meme text: Global Vectors for Word Representation (GloVe). Word embeddings represent the semantic and syntactic meaning of words as dense vector representations. They have improved the performance of several downstream tasks across various domains, such as text classification and machine comprehension (Indurthi et al., 2019). GloVe (Pennington et al., 2014) embeddings trained on a Twitter corpus were used for encoding the words of each meme text.…”
Section: Meme Images: Inception Network
confidence: 99%
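The quoted passage describes encoding meme text with Twitter-trained GloVe word vectors. The sketch below shows one common way to do this with gensim's pretrained "glove-twitter-100" model and simple mean-pooling; the cited work's exact dimensionality and pooling scheme are not stated in the excerpt, so these are assumptions.

```python
# Sketch: encode text with Twitter-trained GloVe vectors via mean-pooling (assumed 100-d model).
import numpy as np
import gensim.downloader as api

glove = api.load("glove-twitter-100")  # pretrained GloVe vectors trained on tweets

def encode(text: str) -> np.ndarray:
    """Average the GloVe vectors of in-vocabulary tokens (simple mean-pooling)."""
    vecs = [glove[w] for w in text.lower().split() if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(glove.vector_size)

print(encode("when the wifi drops for one second").shape)  # -> (100,)
```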
“…Offensive language identification has seen extensive use of language-modelling approaches such as BERT (Pelicon et al., 2019; Pavlopoulos et al., 2019; Wu et al., 2019; Liu et al., 2019), GPT (Zampieri et al., 2019b) and ELMo (Indurthi et al., 2019) with varying hyperparameters and pre-processing steps. In this work, based on its widespread usage, BERT (Devlin et al., 2019) is used as the classifier.…”
Section: Classifier
confidence: 99%
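For context on the quoted setup, the sketch below shows a single fine-tuning step of BERT as a binary offensive-language classifier using the HuggingFace transformers library. The "bert-base-uncased" checkpoint, placeholder tweets, and omission of the optimizer are assumptions for brevity; the cited systems' hyperparameters and pre-processing steps differ and are not given in the excerpt.

```python
# Sketch: one fine-tuning step of BERT for binary offensive-language classification (assumed setup).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["you are awful", "have a nice day"]   # placeholder tweets
labels = torch.tensor([1, 0])                  # 1 = offensive, 0 = not offensive

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)        # forward pass returns loss and logits

outputs.loss.backward()                        # one training step (optimizer omitted for brevity)
print(outputs.logits.argmax(dim=-1))           # predicted class per tweet
```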