Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages 2022
DOI: 10.18653/v1/2022.dravidianlangtech-1.32
DLRG@DravidianLangTech-ACL2022: Abusive Comment Detection in Tamil using Multilingual Transformer Models

Abstract: Online social networks let people connect and interact with each other. They do, however, also provide a platform for online abusers to propagate abusive content. The majority of these abusive remarks are written in a multilingual style, which allows them to easily slip past online moderation. This paper presents a system developed for the Shared Task on Abusive Comment Detection (Misogyny, Misandry, Homophobia, Transphobic, Xenophobia, CounterSpeech, Hope Speech) in Tamil at DravidianLangTech@ACL 2022 to …
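The excerpt does not show the authors' exact pipeline, but the title's approach (a multilingual transformer fine-tuned for the shared task's comment categories) typically looks like the minimal sketch below, assuming XLM-RoBERTa via Hugging Face `transformers`. The checkpoint name, label set, and example input are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed setup, not the authors' exact system):
# classify a comment into the shared task's abuse categories with a
# multilingual transformer and a sequence-classification head.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed label inventory based on the task names in the abstract.
LABELS = ["Misogyny", "Misandry", "Homophobia", "Transphobic",
          "Xenophobia", "CounterSpeech", "Hope Speech", "None"]

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(LABELS))

# Tokenize one (hypothetical) comment and take the argmax class.
inputs = tokenizer("example Tamil comment", return_tensors="pt",
                   truncation=True, max_length=128)
logits = model(**inputs).logits
print(LABELS[logits.argmax(dim=-1).item()])
```

In practice the head would be fine-tuned on the shared-task training split before inference; the snippet only shows the model and label wiring.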

Cited by 8 publications (1 citation statement) | References 15 publications
“…As can be seen in Table 5, adapter-based MuRIL (Large) obtained better accuracy than the other proposed and existing models. Even though XLM-RoBERTa, proposed by [58], gives better performance than the models proposed in the current study, MuRIL-Large has outperformed it. However, as shown in Table 5, the models proposed in [57] give better performance than those proposed here. To justify the performance of the proposed models, we have compared the parameter efficiency of the transformer-based models; the results are discussed in Section 6.3.…”
Section: Performance Comparison
confidence: 69%
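The citing statement credits its best accuracy to adapter-based MuRIL (Large), where small bottleneck adapter layers are trained while the pretrained backbone stays frozen; that is what makes the parameter-efficiency comparison in its Section 6.3 meaningful. A minimal sketch of that style of adapter tuning, assuming the AdapterHub `adapters` library and the public `google/muril-large-cased` checkpoint (both assumptions; the citing paper's exact configuration is not shown in this excerpt):

```python
# Hedged sketch of adapter-based fine-tuning in the style the citing
# work describes. Library choice and checkpoint are assumptions.
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "google/muril-large-cased", num_labels=8)  # 8 = assumed label count
adapters.init(model)                    # retrofit adapter support onto the model
model.add_adapter("abuse_detection")    # insert small bottleneck adapter layers
model.train_adapter("abuse_detection")  # freeze the backbone; train adapters only

# Parameter efficiency: only a small fraction of weights stays trainable.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")
```

Because only the adapter (and head) parameters receive gradients, the trainable count printed here is orders of magnitude below full fine-tuning, which is the trade-off the citing work's parameter-efficiency comparison evaluates.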