EVALITA Evaluation of NLP and Speech Tools for Italian - December 17th, 2020
DOI: 10.4000/books.aaccademia.6769

UniBO @ AMI: A Multi-Class Approach to Misogyny and Aggressiveness Identification on Twitter Posts Using AlBERTo

Abstract: Welcome to EVALITA 2020! EVALITA is the evaluation campaign of Natural Language Processing and Speech Tools for Italian. EVALITA is an initiative of the Italian Association for Computational Linguistics (AILC, http://www.ai-lc.it) and it is endorsed by the Italian Association for Artificial Intelligence (AIxIA, http://www.aixia.it) and the Italian Association for Speech Sciences (AISV, http://www.aisv.it). This volume includes the reports of both task organisers and participants in all of the EVALITA 2020 chall…

Cited by 7 publications (6 citation statements). References 12 publications.
“…Abusive Language. Typically, abusive language refers to a wide range of concepts (Balayn et al., 2021; Poletto et al., 2021), including hate speech (Yin and Zubiaga, 2021; Alkomah and Ma, 2022; Jain and Sharma, 2022), profanity (Soykan et al., 2022), aggressive language (Muti et al., 2022; Kanclerz et al., 2021), offensive language (Pradhan et al., 2020; Kogilavani et al., 2021), cyberbullying (Rosa et al., 2019) and misogyny (Shushkevich and Cardiff, 2019). Pamungkas et al. (2023) overview recent research across domains and languages.…”
Section: Background and Related Work
confidence: 99%
“…For Task antiLGBT, we adopt a multi-class approach with three output units corresponding to mutually exclusive categories. This approach is based on the top-performing model (Muti and Barrón-Cedeño, 2020) at the AMI shared task on the identification of misogynous and aggressive tweets (Fersini et al., 2020). No external data is considered in this model.…”
Section: Task antiLGBT
confidence: 99%
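The multi-class setup described in this citation can be sketched as a single softmax head over three mutually exclusive labels. The label names, feature dimensionality, and untrained random weights below are illustrative assumptions; the cited system derives its features from AlBERTo.

```python
import numpy as np

# Assumed label set for illustration (one and only one label per tweet)
LABELS = ["none", "misogynous", "misogynous_aggressive"]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_multiclass(features, W, b):
    """Single softmax head: probabilities over the three classes sum to 1,
    so exactly one label is predicted per input."""
    probs = softmax(features @ W + b)
    return [LABELS[i] for i in probs.argmax(axis=-1)], probs

# Toy example: 2 tweets, 4-dim encoder features (stand-in for AlBERTo output)
rng = np.random.default_rng(0)
feats = rng.normal(size=(2, 4))
W, b = rng.normal(size=(4, 3)), np.zeros(3)
labels, probs = classify_multiclass(feats, W, b)
```

Because the softmax normalises across all three units, the categories are mutually exclusive by construction.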
“…The output of each classifier is also a sigmoid function. We opt for this approach after observing that treating the classes separately increased performance over a multi-class model predicting misogynous, misogynous-aggressive or none (Muti and Barrón-Cedeño, 2020). This approach allows us to predict multiple mutually non-exclusive classes.…”
Section: System Overview
confidence: 99%
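By contrast, the multi-label setup described in this citation attaches one independent sigmoid per class, so labels such as "misogynous" and "aggressive" can fire together. A minimal sketch, with hypothetical feature dimensions and untrained weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify_multilabel(features, W, b, threshold=0.5):
    """One sigmoid per class: each probability is computed independently,
    so any subset of classes (including none or all) can be predicted."""
    probs = sigmoid(features @ W + b)   # shape (n, 2), no normalisation across classes
    return probs >= threshold, probs

# Toy example: 3 tweets, 4-dim features, 2 classes (misogynous, aggressive)
rng = np.random.default_rng(1)
feats = rng.normal(size=(3, 4))
W, b = rng.normal(size=(4, 2)), np.zeros(2)
flags, probs = classify_multilabel(feats, W, b)
```

Unlike the softmax head, the per-class probabilities here need not sum to 1, which is what makes the classes non-exclusive.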
“…The task is more challenging than when dealing with text alone because, in general, both the textual and the visual channels play an indivisible role in conveying the desired message. We build upon our previous experience in identifying misogyny and aggressiveness in text (Muti and Barrón-Cedeño, 2020) and approach both multimodal tasks with a supervised multi-modal bitransformer model (MMBT) (Kiela et al., 2020a). We use bert-base-uncased-hatexplain (Mathew et al., 2020) and bert-base-uncased (Devlin et al., 2019) for the textual embeddings, and CLIP (Radford et al., 2021) for the visual ones.…”
Section: Introduction
confidence: 99%
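As a rough illustration of the multimodal idea: the sketch below simply concatenates a text feature vector with an image feature vector before a linear classifier. This is a simplified late-fusion stand-in, not the actual MMBT of Kiela et al. (2020a), which instead feeds image embeddings into the transformer as additional input tokens; all dimensions and weights here are illustrative.

```python
import numpy as np

def late_fusion_logits(text_feat, image_feat, W, b):
    """Concatenate textual (e.g. BERT-style) and visual (e.g. CLIP-style)
    feature vectors, then apply a single linear layer to produce class
    logits. A minimal late-fusion baseline, not the MMBT architecture."""
    fused = np.concatenate([text_feat, image_feat], axis=-1)
    return fused @ W + b

# Toy example: 2 memes, 8-dim text features, 4-dim image features, 2 classes
rng = np.random.default_rng(2)
t = rng.normal(size=(2, 8))   # stand-in for textual embeddings
v = rng.normal(size=(2, 4))   # stand-in for visual embeddings
W, b = rng.normal(size=(12, 2)), np.zeros(2)
logits = late_fusion_logits(t, v, W, b)
```

The design choice in MMBT is precisely to avoid this kind of shallow concatenation: by mapping image features into the token embedding space, the transformer can attend jointly across both modalities.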