Proceedings of the Sixth Workshop on Noisy User-Generated Text (W-NUT 2020), 2020
DOI: 10.18653/v1/2020.wnut-1.49

InfoMiner at WNUT-2020 Task 2: Transformer-based Covid-19 Informative Tweet Extraction

Abstract: Identifying informative tweets is an important step when building information extraction systems based on social media. WNUT-2020 Task 2 was organised to recognise informative tweets among noisy tweets. In this paper, we present our approach to tackling the task using transformers. Overall, our approach achieves 10th place in the final rankings with an F1 score of 0.9004 on the test set.
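As a rough illustration of the transformer-based approach described in the abstract, the sketch below fine-tunes a pretrained transformer as a binary sequence classifier over tweets. It assumes the HuggingFace transformers and datasets libraries; the checkpoint name, file names, and hyperparameters are illustrative placeholders, not the authors' exact configuration.

```python
# Minimal sketch of transformer fine-tuning for informative-tweet classification.
# Assumes HuggingFace `transformers` and `datasets`; the checkpoint, file names,
# and hyperparameters are illustrative, not the authors' reported setup.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

MODEL_NAME = "bert-base-cased"  # any transformer checkpoint could be substituted

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Hypothetical CSV files with "text" (tweet) and "label" (0 = uninformative, 1 = informative).
data = load_dataset("csv", data_files={"train": "train.csv", "validation": "valid.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

data = data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="infominer-sketch",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    seed=42,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=data["train"], eval_dataset=data["validation"],
                  tokenizer=tokenizer)
trainer.train()
```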

Cited by 20 publications (15 citation statements). References 21 publications.

“…The rest of the hyperparameters which we kept as constants are mentioned in the Appendix. When performing training, we trained five models with different random seeds and considered the majority-class self ensemble mentioned in Hettiarachchi and Ranasinghe (2020b) to get the final predictions.…”
Section: Hyperparameter Configurations (mentioning; confidence: 99%)
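The majority-class self ensemble quoted above can be sketched as a per-tweet majority vote over the labels predicted by models trained with different random seeds. The arrays below are hypothetical placeholders for real per-seed outputs, not the authors' exact implementation.

```python
# Sketch of a majority-class self-ensemble: train the same model with several
# random seeds, then keep the label predicted by most seeds for each tweet.
import numpy as np

def majority_vote(per_seed_predictions):
    """per_seed_predictions: list of 1-D integer label arrays, one per seed."""
    stacked = np.stack(per_seed_predictions)  # shape: (n_seeds, n_tweets)
    # A tweet is labelled "informative" (1) if more than half of the seeds say so.
    return (stacked.sum(axis=0) > stacked.shape[0] / 2).astype(int)

# Example with five hypothetical seed runs over four tweets.
seed_runs = [np.array([1, 0, 1, 1]),
             np.array([1, 0, 0, 1]),
             np.array([1, 1, 1, 1]),
             np.array([0, 0, 1, 1]),
             np.array([1, 0, 1, 0])]
print(majority_vote(seed_runs))  # -> [1 0 1 1]
```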
“…al. [22] used transformers on COVID-19 tweets to classify informative and non-informative information. Another interesting application by Wang et.…”
Section: Related Work (mentioning; confidence: 99%)
“…Document classification can be considered as a sequence classification problem. According to recent literature, transformer architectures have shown promising results in this area (Ranasinghe et al, 2019b;Hettiarachchi and Ranasinghe, 2020).…”
Section: Subtask 1: Document Classification (mentioning; confidence: 99%)
“…In LM, we retrain the transformer model on the targeted dataset using the model's initial training objective before fine-tuning it for the downstream task. This step helps increase the model understanding of data (Hettiarachchi and Ranasinghe, 2020).…”
Section: Language Modelling (LM) (mentioning; confidence: 99%)
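The language-modelling step quoted above (continuing the model's original pre-training objective on the task's own data before fine-tuning) could look roughly like the following masked-language-modelling sketch. It assumes the HuggingFace transformers and datasets libraries; the checkpoint, file names, and hyperparameters are assumptions for illustration rather than the paper's reported setup.

```python
# Sketch of the LM retraining step: continue masked-language-model training on the
# task's own tweets, then reuse the saved checkpoint for classification fine-tuning.
# File names and hyperparameters are illustrative placeholders.
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

MODEL_NAME = "bert-base-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
mlm_model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

# Hypothetical plain-text file with one tweet per line.
tweets = load_dataset("text", data_files={"train": "tweets.txt"})
tweets = tweets.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens, matching the usual masked-LM training objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="lm-retrained", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tweets["train"],
    data_collator=collator,
)
trainer.train()
# The checkpoint saved here would then be loaded with
# AutoModelForSequenceClassification for the downstream fine-tuning step.
trainer.save_model("lm-retrained")
```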