2021
DOI: 10.48550/arxiv.2106.10487
Preprint

Transformers for Headline Selection for Russian News Clusters

Abstract: In this paper, we explore various multilingual and Russian pre-trained transformer-based models for the Dialogue Evaluation 2021 shared task on headline selection. Our experiments show that the combined approach is superior to individual multilingual and monolingual models. We analyze a number of ways to obtain sentence embeddings and to learn a ranking model on top of them. We achieve 87.28% and 86.60% accuracy on the public and private test sets, respectively.
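
The abstract describes building sentence embeddings from pre-trained transformers and ranking candidate headlines on top of them. The sketch below is not the authors' pipeline; it is a minimal illustration of that general setup, assuming a multilingual encoder (LaBSE is an illustrative choice) and a simple centroid-similarity ranker instead of a learned ranking model.

```python
# Minimal sketch: mean-pooled sentence embeddings from a multilingual
# transformer, then rank a cluster's headlines by similarity to the
# cluster centroid. Model choice and ranking rule are assumptions,
# not the method reported in the paper.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "sentence-transformers/LaBSE"  # illustrative encoder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(sentences):
    """Return L2-normalized mean-pooled embeddings for a list of sentences."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      max_length=64, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean over real tokens
    return torch.nn.functional.normalize(pooled, dim=1)

def select_headline(cluster_headlines):
    """Pick the headline whose embedding is closest to the cluster centroid."""
    emb = embed(cluster_headlines)
    centroid = torch.nn.functional.normalize(emb.mean(dim=0, keepdim=True), dim=1)
    scores = (emb @ centroid.T).squeeze(1)  # cosine similarity to centroid
    return cluster_headlines[scores.argmax().item()]

# Toy cluster of Russian headlines about a central bank rate decision.
cluster = [
    "Центробанк повысил ключевую ставку",
    "ЦБ РФ поднял ключевую ставку",
    "Курс рубля вырос после решения ЦБ",
]
print(select_headline(cluster))
```

In the paper's setting, a trained ranking model would replace the centroid-similarity scoring, and embeddings from multilingual and Russian monolingual encoders could be combined before ranking.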

Cited by 0 publications
References 1 publication