People taking part in argumentative debates through collective annotations face a cognitively demanding task when trying to estimate the group's overall opinion. To reduce this effort, we propose in this paper to model such debates before evaluating their "social validation." Computing the degree of global confirmation (or refutation) enables the identification of consensual (or controversial) debates. Readers as well as prominent information systems may thus benefit from this information. The accuracy of the social validation measure was tested through an online study conducted with 121 participants. We compared the participants' perception of consensus in argumentative debates with the results of the three proposed social validation algorithms. With an accuracy of up to 84%, the algorithms proved effective at synthesizing opinions.
Introduction and Motivations

Annotating paper documents is a common activity practiced since the early Middle Ages (Fraenkel & Klein, 1999). Field studies show that readers still make extensive use of annotations nowadays (Marshall, 1998; Wolfe & Neuwirth, 2001). Although seemingly insignificant, annotations actually serve key purposes: they support "active reading" by fostering critical thinking while reading (Adler & van Doren, 1972), they facilitate learning through document appropriation, and they aid proofreading, among many others. With the widespread adoption of digital documents both in the workplace and at home, people have felt frustrated at not being able to annotate them (Sellen & Harper, 2003, p. 96). This need led both research labs and companies to design a plethora of annotation systems, mostly targeting Web documents, since the 1990s (Wolfe, 2002). At first, these systems implemented the usual paper-based annotation functions, mainly for personal use. Then, taking advantage of modern computer storage and networking capabilities, annotation systems provided novel features for collective use. In particular, they enabled annotation sharing through dedicated servers, so that users could view and access annotations in context, i.e., within the annotated document. Moreover, subsequent readers may reply to any annotation, or to any reply in turn, thus forming a debate anchored to the discussed passage. Such a debate is also called a "discussion thread" (e.g., Figure 1).

As an asynchronous way of communicating, collective annotations (i.e., annotations along with their discussion threads) are useful at two levels:

• From the readers' point of view, collective annotations enable them to discuss document passages in context. This is an advantage over Internet bulletin boards and forums, where one needs to explain the context of a post in order to be understood. Moreover, feedback about a given annotated document is directly accessible to readers, whereas it would otherwise be scattered over multiple sources. Besides readers, authors also benefit from collective annotations, as they can improve their documents by taking the associated remarks into account.

• For syste...