Authorship verification is a branch of forensic authorship analysis that addresses the following task: given a number of sample documents of an author A and a document allegedly written by A, decide whether the author of the latter document is truly A. We present a scalable authorship verification method that copes with this problem across different languages, genres and topics. The central concept of our method is a model trained with Dutch, English, Greek, Spanish and German text documents. For each language, the model sets specific parameters and a threshold that accepts or rejects the alleged author A. The proposed method offers a wide range of benefits, e.g., a universal (static) threshold for each language and scalability regarding almost any involved component (classification function, ensemble strategy, features, etc.). Furthermore, the method benefits from low runtime, since neither natural language processing techniques nor other computationally intensive methods are involved. In our experiments, we applied the method to 28 test corpora comprising 4,525 verification cases across 16 genres and a large number of mixed topics, achieving competitive results (75% median accuracy). With these results we were able to outperform two state-of-the-art baselines on the same training and test corpora.
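The accept/reject decision described above can be illustrated with a minimal sketch. This is not the paper's actual model: the similarity measure (character trigram profiles with cosine similarity), the helper names, and the per-language threshold values are all invented for illustration; only the general idea of comparing a questioned document against known samples and applying a static per-language threshold comes from the abstract.

```python
from collections import Counter
import math

def char_ngram_profile(text, n=3):
    """Relative-frequency profile of character n-grams (a common AV feature)."""
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    counts = Counter(grams)
    total = len(grams)
    return {g: c / total for g, c in counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in set(p) | set(q))
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

# Hypothetical static per-language thresholds; the values are invented.
THRESHOLDS = {"nl": 0.35, "en": 0.35, "el": 0.35, "es": 0.35, "de": 0.35}

def verify(known_docs, unknown_doc, lang="en", n=3):
    """Accept the alleged author iff the mean similarity between the
    unknown document and the known documents reaches the threshold."""
    unknown = char_ngram_profile(unknown_doc, n)
    sims = [cosine_similarity(char_ngram_profile(d, n), unknown)
            for d in known_docs]
    return sum(sims) / len(sims) >= THRESHOLDS[lang]
```

A verification case then reduces to a single call such as `verify(samples_of_A, questioned_doc, lang="de")`, which returns `True` (same author) or `False`.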
Authorship verification (AV) is a research subject in the field of digital text forensics that concerns the question of whether two documents have been written by the same person. Over the past two decades, an increasing number of AV approaches have been proposed. However, a closer look at the respective studies reveals that the underlying characteristics of these methods are rarely addressed, which raises doubts regarding their applicability in real forensic settings. The objective of this paper is to fill this gap by proposing clear criteria and properties that aim to improve the characterization of existing and future AV approaches. Based on these properties, we conduct three experiments using 12 existing AV approaches, including the current state of the art. The examined methods were trained, optimized and evaluated on three self-compiled corpora, each of which focuses on a different aspect of applicability. Our results indicate that some of the methods can cope with very challenging verification cases, such as informal chat conversations only 250 characters long (72.7% accuracy) or cases in which two scientific documents were written at different times, with an average gap of 15.6 years (> 75% accuracy). However, we also found that all of the involved methods are prone to errors on cross-topic verification cases.
In this paper we present four informed natural-language watermark embedding methods that operate on the lexical and syntactic layers of German texts. Our scheme provides several benefits over state-of-the-art approaches; for instance, it does not rely on complex NLP operations such as full sentence parsing, word sense disambiguation, named entity recognition or semantic role parsing. Even rich lexical resources (e.g., WordNet or the Collins thesaurus), which play an essential role in many previous approaches, are unnecessary for our system. Instead, our methods require only a part-of-speech tagger, simple wordlists that act as black- and whitelists, and a trained classifier that automatically predicts the ability of potential lexical or syntactic patterns to carry portions of the watermark message. Moreover, some of the proposed methods can easily be adapted to other Indo-European languages, since the grammar rules they rely on are not restricted to German. Because the methods perform only lexical and minor syntactic transformations, the watermarked text suffers no grammatical distortion, and the meaning of the text is preserved in 82.14% of the cases.
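The principle of carrying watermark bits through lexical transformations can be sketched as a toy example. This is not the paper's method (which works on German text with a POS tagger and a trained classifier): the synonym pairs, function names, and English words here are invented solely to show how choosing between two interchangeable word variants can encode one bit per occurrence.

```python
# Invented synonym pairs; each pair maps bit 0 to its first variant
# and bit 1 to its second variant.
SYNONYM_PAIRS = {
    "big": ("big", "large"), "large": ("big", "large"),
    "quick": ("quick", "fast"), "fast": ("quick", "fast"),
}

def embed(text, bits):
    """Embed the bit sequence by picking a synonym variant per carrier word."""
    out, i = [], 0
    for word in text.split():
        pair = SYNONYM_PAIRS.get(word.lower())
        if pair and i < len(bits):
            out.append(pair[bits[i]])  # choose variant according to the bit
            i += 1
        else:
            out.append(word)
    return " ".join(out)

def extract(text, n_bits):
    """Recover the bits by checking which variant of each pair appears."""
    bits = []
    for word in text.split():
        pair = SYNONYM_PAIRS.get(word.lower())
        if pair and len(bits) < n_bits:
            bits.append(pair.index(word.lower()))
    return bits
```

Because the substitutions swap near-synonyms of the same part of speech, the transformed text stays grammatical, which mirrors the abstract's point that lexical and minor syntactic transformations avoid grammatical distortion.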