Studies of political polarization in social media provide mixed evidence on whether discussions necessarily evolve into left and right ideological echo chambers. Recent research shows that, for political and issue-based discussions, patterns of user clusterization may differ significantly, but that cross-cultural evidence of user polarization on particular issues is close to non-existent. Furthermore, most studies have developed network proxies to detect user grouping, rarely taking into account the content of the tweets themselves. Our contribution to this scholarly discussion is founded upon the detection of polarization based on attitudes towards political actors expressed by users in Germany, the USA and Russia within discussions on inter-ethnic conflicts. For this exploratory study, we develop a mixed-method approach to detecting user grouping that includes: crawling for data collection; expert coding of tweets; user clusterization based on user attitudes; construction of word frequency vocabularies (see the sketch below); and graph visualization. Our results show that, in all three cases, the groups detected are far from being conventionally left or right; rather, their views combine anti-institutionalism, nationalism, and pro- and anti-minority views in varying degrees. In addition, more than two threads of political debate may co-exist in the same discussion. Thus, we show that the debate that sees Twitter as either a platform of ‘echo chambering’ or an ‘opinion crossroads’ may be misleading. In our opinion, the role of local political context in shaping (and explaining) user clusterization should not be underestimated.
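As a rough illustration of the vocabulary-construction step mentioned above, the sketch below builds a token-frequency vocabulary for each detected user group; the example tweets and cluster labels are invented for illustration, and the attitude-based clustering itself is assumed to have already been performed upstream.

```python
# A minimal sketch, assuming tweets already carry a group label from the
# attitude-based clustering; texts and labels here are invented examples.
from collections import Counter
import re

tweets = [
    ("the government must protect minorities", "cluster_a"),
    ("close the borders now", "cluster_b"),
    ("protect our jobs and close the borders", "cluster_b"),
]

# Build one token-frequency vocabulary per detected user group.
vocab = {}
for text, cluster in tweets:
    tokens = re.findall(r"[a-z']+", text.lower())
    vocab.setdefault(cluster, Counter()).update(tokens)

for cluster, counts in vocab.items():
    print(cluster, counts.most_common(3))
```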
The paper is dedicated to solving the problem of optimal text classification in the area of automated detection of text typology. In conventional approaches to topicality-based text classification (including topic modeling), the number of clusters has to be set by the scholar, and both the optimal number of clusters and the quality of the model that designates the proximity of texts to each other remain unresolved questions. We propose a novel approach to the automated definition of the optimal number of clusters that also incorporates an assessment of the word proximity of texts, combined with a text encoding model based on sentence embeddings. Our approach combines Universal Sentence Encoder (USE) data pre-processing, agglomerative hierarchical clustering by Ward’s method, and the Markov stopping moment for optimal clustering. The preferred number of clusters is determined based on the “e-2” hypothesis. We set up an experiment on two datasets of real-world labeled data: News20 and BBC. The proposed model is tested against more traditional text representation methods, such as bag-of-words and word2vec, and provides much better resulting quality than the baseline DBSCAN and OPTICS models with different encoding methods. We use three quality metrics to demonstrate that clustering quality does not drop as the number of clusters grows. Thus, we get close to the convergence of text clustering and text classification.
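To make the encoding-plus-clustering pipeline concrete, here is a minimal sketch that pairs Universal Sentence Encoder embeddings (via TensorFlow Hub) with Ward-linkage agglomerative clustering in scikit-learn. The Markov stopping moment and the “e-2” rule for selecting the number of clusters are not reproduced here; a fixed cluster count and a single silhouette check stand in as placeholders.

```python
# A minimal sketch of the USE + Ward clustering pipeline described above.
# The automated selection of the optimal number of clusters (Markov stopping
# moment / "e-2" hypothesis) is NOT implemented; n_clusters is a placeholder.
import tensorflow_hub as hub
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

texts = [
    "Stocks rallied after the central bank held rates steady.",
    "Markets dipped as inflation data surprised analysts.",
    "The striker scored twice in the cup final.",
    "The home team clinched the league title on Sunday.",
]

# 1. Encode each document into a 512-dimensional sentence embedding (USE).
use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
embeddings = use(texts).numpy()

# 2. Agglomerative hierarchical clustering with Ward linkage.
n_clusters = 2  # placeholder; the paper derives this automatically
model = AgglomerativeClustering(n_clusters=n_clusters, linkage="ward")
labels = model.fit_predict(embeddings)

# 3. One conventional quality check (silhouette); the paper uses three metrics.
print("labels:", labels)
print("silhouette:", silhouette_score(embeddings, labels))
```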
Today, aggressive verbal behavior is generally perceived as a threat to the integrity and democratic quality of public discussions, including those online. However, we argue that, in more restrictive political regimes, communicative aggression may play constructive roles in both discussion dynamics and the empowerment of political groups. This might be especially true for restrictive political and legal environments like that of Russia, where obscene speech is prohibited by law in registered media and the political environment leaves little space for voicing discontent. Taking Russian YouTube as an example, we explore the roles of two under-researched types of communicative aggression—obscene speech and politically motivated hate speech—within the publics of video commenters. For that, we use the case of the Moscow protests of 2019 against the non-admission of independent and oppositional candidates to run for the Moscow city parliament. A sample of over 77,000 comments on 13 videos with more than 100,000 views underwent pre-processing and vocabulary-based detection of aggression. To assess the impact of hate speech on discussion dynamics, we used Granger causality tests and an assessment of discussion histograms; we also assessed selected groups of posts in an exploratory manner. Our findings demonstrate that communicative aggression helps to express immediate support and solidarity; it also contextualizes criticism of both the authorities and regime challengers, and demarcates the counter-public.
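For readers unfamiliar with how a Granger test applies in this setting, the sketch below runs a standard Granger causality check (statsmodels) on hypothetical hourly counts of aggressive versus total comments for one video; the per-hour counts, the time binning, and the lag order are illustrative assumptions, not the authors' actual setup.

```python
# A minimal sketch, assuming comments have already been binned per hour and
# labeled as aggressive via a vocabulary; the numbers below are invented.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical per-hour discussion dynamics for one video.
data = pd.DataFrame({
    "total_comments":      [40, 55, 80, 120, 90, 60, 45, 30, 25, 20, 18, 15],
    "aggressive_comments": [ 2,  6, 15,  30, 22, 10,  6,  3,  2,  1,  1,  0],
})

# H0: aggressive-comment volume does not Granger-cause total comment volume.
# grangercausalitytests expects the "caused" series in the first column.
results = grangercausalitytests(
    data[["total_comments", "aggressive_comments"]], maxlag=2, verbose=False
)
for lag, res in results.items():
    pval = res[0]["ssr_ftest"][1]
    print(f"lag {lag}: ssr F-test p-value = {pval:.3f}")
```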