This article provides a taxonomy of, and nomenclature for, collaborative writing, and discusses related issues. The goal is to enhance research on collaborative writing, improve its application in academia and industry, and help produce technologies that better support it. To write collaboratively and to build supportive technologies, practitioners and academics need a consistent nomenclature and taxonomy of collaborative writing. This article defines key collaborative writing terms and builds a taxonomy covering collaborative writing activities, strategies, control modes, work modes, and roles. It stresses that effective choices in group awareness, participation, and coordination are critical to successful collaborative writing outcomes, and that these outcomes can be promoted through collaborative writing software, chat software, face-to-face meetings, and group processes.
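To make the taxonomy's dimensions concrete, here is a minimal Python sketch that models a collaborative writing session as a set of enumerations and a dataclass. The specific category values shown (e.g., PARALLEL, SHARED) are illustrative assumptions for this example, not a complete transcription of the article's taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative values only; the article's full taxonomy is richer.
class Strategy(Enum):
    GROUP_SINGLE_AUTHOR = "one member writes on behalf of the group"
    SEQUENTIAL = "members write one after another"
    PARALLEL = "members write different parts at the same time"
    REACTIVE = "members adjust to each other's changes in real time"

class ControlMode(Enum):
    CENTRALIZED = "one person controls the document"
    RELAY = "control passes from member to member"
    SHARED = "all members have simultaneous control"

class WorkMode(Enum):
    SAME_TIME_SAME_PLACE = "face-to-face meeting"
    SAME_TIME_DIFFERENT_PLACE = "synchronous distributed"
    DIFFERENT_TIME_DIFFERENT_PLACE = "asynchronous distributed"

class Role(Enum):
    WRITER = "writer"
    EDITOR = "editor"
    REVIEWER = "reviewer"
    FACILITATOR = "facilitator"

@dataclass
class CollaborativeWritingSession:
    """One configuration drawn from the taxonomy's dimensions."""
    strategy: Strategy
    control_mode: ControlMode
    work_mode: WorkMode
    roles: dict[str, Role]  # member name -> assigned role

# Example: a distributed team writing sections in parallel.
session = CollaborativeWritingSession(
    strategy=Strategy.PARALLEL,
    control_mode=ControlMode.SHARED,
    work_mode=WorkMode.SAME_TIME_DIFFERENT_PLACE,
    roles={"Ana": Role.WRITER, "Ben": Role.EDITOR, "Chen": Role.REVIEWER},
)
print(session.strategy.name, "-", session.work_mode.value)
```

Treating the taxonomy as orthogonal dimensions in this way makes it easy to see that a team's configuration is a point in a design space, which is how supportive technologies can reason about which features (awareness cues, locking, commenting) a given session needs.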
Many argue that the Information Systems (IS) field is at a critical juncture in its evolving identity. In debating whether the IS field is in crisis, we agree with Hirschheim and Klein (2003) that "reflective analysis" will contribute to the field's continued prosperity. Indeed, reflective analysis is needed to evaluate the field's journals as well as IS journal rankings, which are used to gauge the effectiveness and productivity both of researchers and of journals in communicating research results. After all, where and how we publish are fundamental aspects of the identity of the IS field, reflecting our value systems, paradigms, cultural practices, reward systems, political hierarchy, and aspirations. This article reviews the results of the largest global scientometric survey of IS journal rankings to date, which targeted 8741 faculty from 414 IS departments worldwide and yielded 2559 responses, a 32% response rate. Rather than using predetermined journal lists, the study required respondents to freely recall their top four research journals.
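As a rough illustration of the free-recall approach, the sketch below tallies hypothetical top-four lists into a weighted ranking (4 points for a first mention down to 1 for a fourth). The weighting scheme, tie-breaking rule, and journal abbreviations are placeholders chosen for this example, not the study's actual scoring method or results.

```python
from collections import Counter

# Hypothetical free-recall responses: each list is one respondent's
# top-four journals in rank order (abbreviations are placeholders).
responses = [
    ["MISQ", "ISR", "JMIS", "JAIS"],
    ["ISR", "MISQ", "JAIS", "EJIS"],
    ["MISQ", "JMIS", "ISR", "EJIS"],
]

scores = Counter()
mentions = Counter()
for ranked_list in responses:
    for position, journal in enumerate(ranked_list):
        scores[journal] += 4 - position  # 4 points for rank 1 ... 1 for rank 4
        mentions[journal] += 1

# Rank journals by weighted score, breaking ties by raw mention count.
ordering = sorted(scores.items(), key=lambda kv: (-kv[1], -mentions[kv[0]]))
for journal, score in ordering:
    print(f"{journal}: score={score}, mentions={mentions[journal]}")
```

The point of free recall is visible in the data structure itself: the ranking emerges from what respondents spontaneously name, rather than from reactions to a predetermined list.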
Research on face-to-face teams shows conflicting results about the impact of behavioral controls on trust: some research shows that controls increase the salience of good behavior, which increases trust, while other research shows that controls increase the salience of poor behavior, which decreases trust. The only study in virtual teams, which examined poorly functioning teams, found that controls increased the salience of poor behavior, which decreased trust. We argue that in virtual teams behavioral controls amplify the salience of all behaviors (positive and negative) and that an individual's selective perception bias influences how these behaviors are interpreted. Thus, the link from behavioral controls to trust is more complex than first thought. We conducted a 2 × 2 experiment, varying the use of behavioral controls (controls, no controls) and individual team member behaviors (reneging behaviors designed to reduce trust beliefs and fulfilling behaviors designed to increase trust beliefs). We found that behavioral controls did amplify the salience of all behaviors; however, contrary to our expectations, this actually weakened the impact of reneging and fulfilling behaviors on trust. We believe that completing a formal evaluation increased empathy and awareness of the context in which the behaviors occurred, and thus mitigated extreme perceptions. We also found that behavioral controls increased selective perception bias, inducing participants to see the behaviors that their disposition to trust led them to expect rather than the behaviors that actually occurred.
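For readers unfamiliar with the design, the sketch below lays out the four cells of a 2 × 2 factorial (controls × behavior) on purely synthetic trust scores and tests the main effects and interaction with a two-way ANOVA. The cell means, sample sizes, and scale are fabricated for illustration; nothing here reproduces the study's instruments or findings.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(42)

# Synthetic cell means: fulfilling behaviors raise trust, reneging lowers it;
# behavioral controls dampen the behavior effect (an attenuated gap between
# the fulfilling and reneging cells), mirroring the attenuation idea above.
cell_means = {
    ("controls", "fulfilling"): 5.2,
    ("controls", "reneging"): 3.8,
    ("no_controls", "fulfilling"): 5.8,
    ("no_controls", "reneging"): 2.9,
}

rows = []
for (controls, behavior), mean in cell_means.items():
    for score in rng.normal(loc=mean, scale=0.8, size=30):  # 30 per cell
        rows.append({"controls": controls, "behavior": behavior, "trust": score})
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of controls and behavior, plus their interaction.
model = ols("trust ~ C(controls) * C(behavior)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

In this design, the theoretical claim that controls weaken the behavior-trust link corresponds to a significant controls × behavior interaction term, not to either main effect alone.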
Interest in the repertory grid technique has been growing in the IS field. This article seeks to inform the reader on the proper use and application of the technique in IS research. The technique has unique advantages that make it suitable for many research settings. In this tutorial, we describe the technique, its theoretical underpinnings, and how IS researchers may use it. We conclude by detailing the many IS research opportunities that exist with respect to the repertory grid technique.
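In a repertory grid, respondents rate a set of elements (e.g., systems or people) on bipolar constructs they elicit themselves. The sketch below builds a small grid as a NumPy matrix and correlates construct rows, one common first step in grid analysis. The element names, construct labels, and ratings are invented for illustration.

```python
import numpy as np

# Rows: elicited bipolar constructs; columns: elements being compared.
# Ratings on a 1-7 scale (1 = left pole applies, 7 = right pole applies).
constructs = ["easy to use -- hard to use",
              "flexible -- rigid",
              "reliable -- unreliable"]
elements = ["System A", "System B", "System C", "System D"]

grid = np.array([
    [2, 6, 3, 5],   # easy to use -- hard to use
    [3, 5, 2, 6],   # flexible -- rigid
    [1, 6, 4, 4],   # reliable -- unreliable
])

# Correlate construct rows to see which constructs are used similarly:
# a high correlation suggests the respondent treats them as related meanings.
corr = np.corrcoef(grid)
for i in range(len(constructs)):
    for j in range(i + 1, len(constructs)):
        print(f"{constructs[i]} vs {constructs[j]}: r = {corr[i, j]:.2f}")
```

Fuller grid analyses (cluster analysis, principal components) operate on the same element-by-construct matrix, which is part of what makes the technique portable across research settings.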
The open-ended responses to the justification question for all participants within each team were aggregated into a single text file and analyzed using the AutoMap software (Carley et al. 2006). This software provides a semi-automated means of identifying the concepts within a body of text and the relationships (based on proximity) among those concepts. Within the aggregated text file, we grouped each participant's response to the justification question into one paragraph and began each response with the participant identifier (e.g.
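AutoMap itself is a standalone tool, so the sketch below is not its API; it is a simplified re-implementation of the underlying idea: treat content words as concepts and link two concepts when they occur within a fixed-size window of each other. The window size, stopword list, and sample text here are assumptions for demonstration only.

```python
import re
from collections import Counter

# Minimal stand-in for a stopword list.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "was",
             "it", "that", "we", "our", "because", "their", "for"}

def concept_network(text: str, window: int = 5) -> tuple[Counter, Counter]:
    """Return concept frequencies and proximity-based co-occurrence counts."""
    tokens = [t for t in re.findall(r"[a-z]{2,}", text.lower())
              if t not in STOPWORDS]
    concepts = Counter(tokens)
    edges = Counter()
    # Link every pair of distinct concepts appearing within `window` tokens.
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window, len(tokens))):
            if tokens[i] != tokens[j]:
                edges[tuple(sorted((tokens[i], tokens[j])))] += 1
    return concepts, edges

# Toy aggregated responses (placeholder text, one paragraph per participant).
text = ("P1 chose the design because the team trusted the vendor. "
        "P2 chose the vendor because the design reduced risk for the team.")
concepts, edges = concept_network(text, window=5)
print(concepts.most_common(5))
print(edges.most_common(5))
```

The resulting edge counts form a concept network: concepts that repeatedly appear near one another across participants' justifications surface as the strongest links, which is the kind of proximity-based relationship the analysis above extracts.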