Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries 2022
DOI: 10.1145/3529372.3530932
A domain-adaptive pre-training approach for language bias detection in news

Abstract: Media bias is a multi-faceted construct influencing individual behavior and collective decision-making. Slanted news reporting is the result of one-sided and polarized writing which can occur in various forms. In this work, we focus on an important form of media bias, i.e., bias by word choice. Detecting biased word choices is a challenging task due to its linguistic complexity and the lack of representative gold-standard corpora. We present DA-RoBERTa, a new state-of-the-art transformer-based model adapted to …
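The abstract is truncated here, but the core of domain-adaptive pre-training is to continue the masked-language-model objective of a pre-trained encoder such as RoBERTa on in-domain (news) text before fine-tuning it on the bias-detection task. A minimal sketch of this general technique with the Hugging Face transformers library is shown below; the corpus file name and all hyperparameters are hypothetical placeholders, not the paper's actual setup.

```python
# Sketch of domain-adaptive pre-training: continued MLM training of RoBERTa
# on in-domain news text. `news_corpus.txt` and hyperparameters are illustrative.
from transformers import (
    RobertaTokenizerFast,
    RobertaForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

# In-domain corpus: one news sentence per line.
dataset = load_dataset("text", data_files={"train": "news_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking, matching RoBERTa's original pre-training objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="da-roberta-mlm",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=5e-5,
)

Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()

# The adapted encoder can then be loaded with a classification head
# (e.g., RobertaForSequenceClassification) and fine-tuned on a labeled
# media-bias dataset for sentence-level bias detection.
```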

Cited by 8 publications (1 citation statement) · References 49 publications
“…Reimers and Gurevych [60] show representations from BERT [14] can be improved with a Siamese architecture [7] when fine-tuned on semantic textual similarity datasets. Other approaches augment pre-trained models (e.g., BART [39], RoBERTa [42]) combining separate trained intermediate tasks and external knowledge sources to solve an additional final task, such as word sense disambiguation [73], paraphrase detection [71,72], fake news detection [70], and media bias detection [35,66]. Also, Cohan et al [12] use citations as a pretraining objective for a scientific BERT language model.…”
Section: Related Work
confidence: 99%
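The citation statement above refers to the Sentence-BERT idea of fine-tuning BERT-style encoders in a Siamese architecture so that sentence embeddings can be compared directly. A minimal usage sketch with the sentence-transformers library follows; the checkpoint name and example sentences are illustrative only, not taken from the cited work.

```python
# Illustrative only: compare sentence embeddings from a Siamese-style
# fine-tuned encoder (Sentence-BERT). Model name and sentences are examples.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any SBERT checkpoint works

sentences = [
    "The senator defended the controversial bill.",
    "The senator rammed the controversial bill through.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```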