Proceedings of the 11th Forum for Information Retrieval Evaluation 2019
DOI: 10.1145/3368567.3368588
Ciq@fire

Cited by 2 publications (2 citation statements); references 0 publications.
“…Improvement was noted when attention layers and part-of-speech vector representations were incorporated into the design. The second approach, an ensemble of machine learning, neural network, and transformer-based models, provided the best overall performance. HateMonitors (Saha et al. 2019): the authors detected abusive content with zero-shot transfer learning, using pre-trained BERT and LASER sentence embeddings. To keep the system language-independent, they fed the BERT and LASER embeddings to a Gradient Boosting classifier. 3Idiots (Mishra and Mishra 2019): pre-trained monolingual and multilingual transformer models were fine-tuned as BERT-based neural network classifiers.…”
Section: Datasets and Experimental Setting
confidence: 99%
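The HateMonitors pipeline quoted above can be sketched roughly as follows. This is an assumption about the described architecture, not the authors' code: sentence embeddings from BERT (768-dim) and LASER (1024-dim) are concatenated into one language-independent feature vector per post and passed to a Gradient Boosting classifier. Random vectors and synthetic labels stand in for the real embeddings and annotations here.

```python
# Minimal sketch of a BERT+LASER embedding + Gradient Boosting pipeline
# (hypothetical reconstruction; real embeddings would come from the
# pre-trained BERT and LASER encoders, not random vectors).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_posts = 40
bert_dim, laser_dim = 768, 1024  # typical output sizes of the two encoders

bert_emb = rng.normal(size=(n_posts, bert_dim))    # stand-in for BERT embeddings
laser_emb = rng.normal(size=(n_posts, laser_dim))  # stand-in for LASER embeddings

# Concatenate per-post embeddings into one feature vector
features = np.hstack([bert_emb, laser_emb])

# Synthetic binary labels: 1 = abusive, 0 = not abusive
labels = np.array([0, 1] * (n_posts // 2))

clf = GradientBoostingClassifier(n_estimators=10, random_state=0)
clf.fit(features, labels)
preds = clf.predict(features)
```

Because the classifier only sees fixed-length embedding vectors, the same trained model can in principle score posts in any language the encoders cover, which is the language-independence the citation statement refers to.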
“…HateMonitors (Saha et al. 2019): the authors detected abusive content with zero-shot transfer learning, using pre-trained BERT and LASER sentence embeddings. To keep the system language-independent, they fed the BERT and LASER embeddings to a Gradient Boosting classifier.…”
Section: Datasets and Experimental Setting
confidence: 99%