We describe the first automatic approach for merging coreference annotations obtained from multiple annotators into a single gold standard. This merging is subject to certain linguistic hard constraints and to optimization criteria that prefer solutions with minimal divergence from the annotators. The representation involves an equivalence relation over a large number of elements. We use Answer Set Programming to describe two representations of the problem and four objective functions suitable for different datasets. We provide two structurally different real-world benchmark datasets based on the METU-Sabanci Turkish Treebank, and we report our experiences in using the Gringo, Clasp, and Wasp tools for computing optimal adjudication results on these datasets.

* This work extends prior work [Sch17] with a semi-automatic adjudication encoding, extended formal descriptions and discussions, a tool description, and several additional examples. This is a preprint of [Sch18].

Tuggener [Tug14] compares the accuracy of coreference resolution systems when used as preprocessing for discourse analysis, summarization, and finding entity contexts.

Adjudication is the task of combining mention and chain information from several human annotators into a single gold standard corpus. These annotations are often mutually conflicting, and resolving the conflicts is a task that is global at the document level, i.e., it is not possible to decide the truth of the annotation of one token, mention, or chain without considering the other tokens, mentions, and chains in the same document.

We here present results and experiences obtained in a two-year project for creating a Turkish coreference corpus [Sch+17], which included an effort to develop and improve a (semi-)automatic solution for coreference adjudication. We produced two datasets assembled from 475 individual annotations of 33 distinct documents from the METU-Sabanci Turkish Treebank [Say+04].
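To make the equivalence-relation view of adjudication concrete, the following is a minimal sketch of a naive majority-link baseline (not the ASP-based optimization this paper describes; all names are illustrative). Each annotation is treated as a partition of mentions into chains; a link between two mentions is kept if a majority of annotators place them in the same chain, and the result is closed under transitivity via union-find so that it remains an equivalence relation.

```python
from itertools import combinations

def majority_merge(annotations, threshold=0.5):
    """Greedy baseline: link two mentions iff more than `threshold` of the
    annotators put them in the same chain, then close under transitivity
    (union-find) so the merged result is an equivalence relation."""
    mentions = sorted({m for ann in annotations for chain in ann for m in chain})

    # Count, for each unordered mention pair, how many annotators link them.
    votes = {}
    for ann in annotations:
        for chain in ann:
            for a, b in combinations(sorted(chain), 2):
                votes[(a, b)] = votes.get((a, b), 0) + 1

    # Union-find structure over all mentions.
    parent = {m: m for m in mentions}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Merge the chains of every majority-supported pair.
    for (a, b), n in votes.items():
        if n / len(annotations) > threshold:
            parent[find(a)] = find(b)

    # Read off the resulting chains (equivalence classes).
    chains = {}
    for m in mentions:
        chains.setdefault(find(m), []).append(m)
    return [sorted(c) for c in chains.values()]

# Three annotators over mentions m1..m3: two of them keep m3 separate.
anns = [
    [["m1", "m2"], ["m3"]],
    [["m1", "m2", "m3"]],
    [["m1", "m2"], ["m3"]],
]
print(majority_merge(anns))  # → [['m1', 'm2'], ['m3']]
```

Note that the transitive closure can over-merge: two links that each enjoy majority support may force a third link that most annotators reject. This is precisely why the document-global optimization described in this paper is needed, rather than local pairwise decisions.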
Adjudicating documents manually with tools such as BART [Ver+08] is usually done on a small set of annotations; for example, the coreference annotations in the OntoNotes corpus [Pra+07] were created by at most two independent annotators per document. In the Turkish corpus, we needed to adjudicate between eight and twelve independent coreference annotations. Given such a high number of annotations, it is natural to use majorities of annotator decisions to suggest an adjudication solution to the human annotator.

In this paper, we describe a (semi-)automatic solution for supporting the adjudication of coreference annotations, based on Answer Set Programming (ASP) [Bar04; Lif08; BET11; Geb+12]. ASP is a logic programming and knowledge representation paradigm that allows for declarative problem specification and is well suited to solving large-scale combinatorial optimization problems.

Our contributions are as follows.