To assess the impact of culture on state behavior in international crises, specifically with regard to mediation and its outcome, this study tests hypotheses rooted in both the international relations and cross-cultural psychology literatures, conducting analysis at both the international-system level and the domestic-state-actor level. At the international-system level, the study finds that cultural difference between adversaries affects whether mediation occurs during an international crisis but has no effect on tension reduction. At the domestic-state-actor level, it finds that certain facets of cultural identity make a state more or less open to requesting or accepting third-party mediation during an international crisis, but that these facets have no effect on tension reduction.
This paper assesses the comparative opportunities and limitations of ‘new’ and ‘old’ data sources for early warning, crisis response, and violence research by comparing reports of political violence, and of both violent and peaceful demonstrations, produced through social media and traditional media during the Kenyan elections of August and October 2017. We leverage a sample of social media reports of violence drawn from public posts to Twitter and compare these with events coded from media and published sources by the Armed Conflict Location & Event Data Project (ACLED) along two dimensions: 1) the geography of violence; and 2) the temporality of reporting. We find that the profile of violence recorded varies significantly by source. Records from Twitter are more geographically concentrated, particularly in the capital city and wealthier areas, and are timelier in the immediate period surrounding elections. Records from ACLED have a wider geographic reach, are relatively more numerous than Twitter’s in rural and less wealthy areas, and are timelier and more consistent in the run-up to and aftermath of elections. While neither source can reveal the ‘true’ violence that occurred, the findings point to the value of drawing on a constellation of source types, given their complementary advantages.
Conflict event datasets are used widely in academic, policymaking, and public spheres. Accounting for political violence across the world requires detailing conflict types, agents, characteristics, and source information. The public and policymaking communities may underestimate the impact of data-collection decisions across global, real-time conflict event datasets. Here, we consider four widely used public datasets with global coverage and demonstrate how they differ in their definitions of conflict and in which aspects of the information-sourcing process they prioritize. First, we identify considerable disparities between automated conflict-coding projects and researcher-led projects, resulting largely from the former’s few inclusion barriers and lack of data oversight. Second, we compare the researcher-led datasets in greater detail. At the crux of their differences is whether a dataset prioritizes and mandates internal reliability by imposing initial conflict definitions on present events, or whether its agenda is to capture an externally valid and comprehensive assessment of present violence patterns. Prioritizing reliability privileges specific forms of violence, despite the possibility that other forms actually occur, and leads to reliance on international and English-language information sources. Prioritizing validity requires a wide definition of the forms of political violence, and requires diverse, multilingual, and local sources. These conceptual, coding, and sourcing variations have significant implications for the use of these data in academic analysis and for practitioner responses to crisis and instability. Because of these foundational differences, answers to “which country is most violent?”, “where are civilians most at risk?”, and “is the frequency of conflict increasing or decreasing?” vary across datasets that all purport to capture the same phenomenon of political violence.
With the increased availability of disaggregated conflict event data for analysis, there are both new and old concerns about bias. All data have biases, which we define as an inclination, prejudice, or directionality in information. In conflict data, there are often perceptions of damaging bias, and skepticism can emanate from several areas, including doubts about whether data-collection procedures create systematic omissions, inflations, or misrepresentations. As curators and analysts of large, popular data projects, we are uniquely aware of the biases present when collecting and using event data. We contend that it is necessary to advance an open and honest discussion about the responsibilities of all stakeholders in the data ecosystem – collectors, researchers, and those interpreting and applying findings – to reflect thoughtfully and transparently on those biases; use data in good faith; and acknowledge limitations. We therefore posit an agenda for data responsibility that considers both its collection and its critical interpretation.