ABSTRACT

Research on social influence shows that different patterns emerge when this phenomenon occurs in computer-mediated communication (CMC) compared to face-to-face interaction. Informational social influence can still take place easily through CMC; normative influence, however, appears more affected by the characteristics of the environment. Several authors have theorized that deindividuation nullifies the effects of normative influence, whereas the Social Identity Model of Deindividuation Effects holds that users conform even when deindividuated, provided that social identity is made salient. The two types of social influence have never been studied in direct comparison, so we designed an online experiment to observe how the same variables affect each of them, and in particular how deindividuation operates in both cases. The 181 experimental subjects who took part performed three tasks: one designed to elicit normative influence, and two semantic tasks created to test informational influence. Entropy was used as a mathematical assessment of information availability. Our results show that normative influence becomes almost ineffective within CMC (1.4% conformity) when subjects are deindividuated. Informational influence is generally more effective than normative influence within CMC (15-29% conformity) but, like normative influence, it is inhibited by deindividuation.
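The abstract does not specify how entropy was computed, but a common choice for quantifying information availability over categorical responses is Shannon entropy. The following is a minimal sketch under that assumption; the function name and the example response data are illustrative, not taken from the study.

```python
import math
from collections import Counter

def shannon_entropy(responses):
    """Shannon entropy (in bits) of a list of categorical responses.

    Higher entropy means responses are spread across more options,
    i.e. less information is available to steer a subject toward
    any single answer.
    """
    counts = Counter(responses)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical answers to a semantic task from a group of peers
print(shannon_entropy(["A", "A", "A", "B"]))  # low entropy: clear majority signal
print(shannon_entropy(["A", "B", "C", "D"]))  # maximal entropy: no signal
```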
Social media platforms are often implicated in the spread of misinformation for encouraging rapid sharing without adequate mechanisms for verifying information. To counter this phenomenon, much related research in computer science has focused on developing tools to detect misinformation, rank fact-check-worthy claims, and understand their spread patterns, while psychosocial approaches have focused on understanding information literacy, ideology, and partisanship. In this paper, we demonstrate through a survey of nearly 100 people that Human Values can have a significant influence on the way people perceive and share information. We argue that integrating a values-oriented perspective into computational approaches to handling misinformation could encourage misinformation prevention and assist in predicting and ranking misinformation.
The stranger on the Internet effect has been studied in relation to self-disclosure. Nonetheless, quantitative evidence about how people mentally represent and perceive strangers online is still missing. Given the dynamic development of web technologies, quantifying how suitable strangers are considered for pro-social acts such as self-disclosure appears fundamental for a whole series of phenomena, ranging from privacy protection to the spread of fake news. Using a modified online version of the Ultimatum Game (UG), we quantified the mental representation of the stranger on the Internet effect and tested whether people modify their behavior according to the interactors' identifiability (i.e., reputation). A total of 444 adolescents took part in a 2 × 2 design experiment in which reputation was either active or not for the two traditional UG tasks. We found that, when matched with strangers, people donate the same amount of money as when the other party has a good reputation. Moreover, reputation significantly affected donation size, acceptance rate, and feedback decision-making.
Social media have created communication channels between citizens and policymakers but are also susceptible to rampant misinformation. This new context demands new social media policies that can aid policymakers in making evidence-based decisions for combating misinformation online. This paper reports on data collected from policymakers in Austria, Greece, and Sweden using focus groups and in-depth interviews. The analyses provide insights into the challenges and identify four important themes for supporting policy-making to combat misinformation: a) creating a trusted network of experts and collaborators, b) facilitating the validation of online information, c) providing access to visualisations of data at different levels of granularity, and d) increasing the transparency and explainability of flagged misinformative content. These recommendations have implications for rethinking how revised social media policies can contribute to evidence-based decision-making. This paper is part of Trust in the system, a special issue of Internet Policy Review guest-edited by Péter Mezei and Andreea Verteş-Olteanu.