2020
DOI: 10.1007/978-3-030-58219-7_25

Overview of PAN 2020: Authorship Verification, Celebrity Profiling, Profiling Fake News Spreaders on Twitter, and Style Change Detection

Abstract: We briefly report on the four shared tasks organized as part of the PAN 2020 evaluation lab on digital text forensics and authorship analysis. Each task is introduced and motivated, and the results obtained are presented. Altogether, the four tasks attracted 228 registrations, yielding 82 successful submissions. This, and the fact that we continue to invite the submission of software rather than its run output using the TIRA experimentation platform, marks a good start into the second decade of PAN evaluati…

Cited by 28 publications (21 citation statements)
References 10 publications
“…BERT/RoBERTa outperformed the T5 model on some styles (e.g., 'sarcasm' and 'metaphor'). Other related tasks are the PAN Authorship Verification (Kestemont et al., 2020) and Style Change Detection tasks, which aim at identifying whether two documents or consec-…”
Section: Related Work
confidence: 99%
“…There are several general evaluation benchmarks for different linguistic phenomena (e.g., Wang et al., 2018, 2019), but less emphasis has been put on linguistic style. Nevertheless, the natural language processing literature shows a variety of approaches for the evaluation of style-measuring methods: they have been tested on whether they group texts by the same authors together (Hay et al., 2020; Bevendorff et al., 2020), whether they can correctly classify the style for ground-truth datasets (Kang and Hovy, 2021), and whether 'similar style words' are similarly represented (Akama et al., 2018). However, these evaluation approaches (i) are often application-specific, (ii) are rarely used to compare different style methods, (iii) usually do not control for content, and (iv) often do not test for fine-grained style differences.…”
Section: Introduction
confidence: 99%
“…The PAN series of shared tasks is considered one of the most important benchmarks and references for authorship attribution research. The PAN authorship verification tasks [17,33,32,4] tackle what Koppel et al. [20] called the "fundamental problem" in authorship attribution: given two documents, are they written by the same author? Bevendorff et al. [5] review the PAN authorship verification task and state that the experiment…”
Section: Related Work
confidence: 99%
“…SPATIUM-L1 is an unsupervised authorship verification model developed by Kocher and Savoy [18]. It was submitted to the PAN at CLEF 2015 Author Identification task, where it placed 4th in the evaluation on the English language.…”
Section: SPATIUM-L1
confidence: 99%
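The SPATIUM-L1 approach described above compares the relative frequencies of the most frequent words of the known text in both texts, using the L1 (Manhattan) distance; a small distance suggests the same author. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the tokenizer, the vocabulary size `k`, and the decision threshold are simplifying assumptions (Kocher and Savoy use a more careful setup and calibration).

```python
from collections import Counter

def spatium_l1(text_a: str, text_b: str, k: int = 200) -> float:
    """L1 distance between the relative frequencies of the k most
    frequent words of text_a, measured in both texts.
    Smaller distance = more stylistically similar (same-author signal)."""
    tok_a = text_a.lower().split()   # naive whitespace tokenizer (assumption)
    tok_b = text_b.lower().split()
    freq_a, freq_b = Counter(tok_a), Counter(tok_b)
    vocab = [w for w, _ in freq_a.most_common(k)]
    return sum(abs(freq_a[w] / len(tok_a) - freq_b[w] / len(tok_b))
               for w in vocab)

# Toy usage: the same-topic pair should score closer than the unrelated pair.
same = spatium_l1("the cat sat on the mat " * 50,
                  "the cat lay on the rug " * 50)
diff = spatium_l1("the cat sat on the mat " * 50,
                  "quarterly revenue grew sharply " * 50)
assert same < diff
```

In practice the verdict ("same author" vs. "different author") requires a threshold calibrated on held-out verification pairs; the raw distance alone is not a decision.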
“…An article's stance can either agree or disagree with the headline, discuss the same topic, or be completely unrelated. An existing data set for stance classification [91]… Profiling Fake News Spreaders on Twitter (https://pan.webis.de/clef20/pan20-web/author-profiling.html, accessed on 3 June 2021) [114] was a shared task organized within CLEF 2020 in the context of PAN, a series of scientific events and shared tasks on digital text forensics and stylometry. The particularity of this task is that it does not try to detect fake news as such, but rather to detect whether a Twitter user is a potential propagator of fake news.…”
confidence: 99%