Global crises provide a fertile environment for the proliferation of disinformation, rumors, and conspiracy narratives. We investigate people's perceptions and beliefs related to COVID-19 in Romania during the lockdown period (April 2020) and during the state-of-alert period (July 2020), by fielding two surveys with different modes of collection (CATI and web). Building on measures tested in other countries, we identify the public's vulnerability to conspiracy narratives and its willingness to comply with public health guidance. Using Structural Equation Modeling, we test whether individuals exhibiting pro-Russian or anti-Western attitudes believe more strongly in COVID-19 conspiracy narratives than the rest of the population. We then test whether those who believe conspiracy narratives are less likely to comply with public health recommendations. In both surveys, we find that holding conspiracy beliefs mediates the relationship between distrust of Western actors and noncompliance with COVID-19 guidelines. Thus, pro-Russian and anti-EU, anti-U.S., and anti-NATO attitudes are linked to stronger conspiracy beliefs, which are associated with lower levels of concern about and knowledge of the virus, which in turn can reduce compliance with guidelines. This suggests that openness to anti-Western narratives may have behavioral consequences. These findings highlight potential sources of unsafe behavior during the pandemic and can inform official communication strategies aimed at countering both disinformation and noncompliance with public health policies.
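As a rough illustration of the mediation structure described in this abstract (anti-Western attitudes → conspiracy beliefs → concern/knowledge → compliance), a minimal sketch in Python using the semopy package is shown below. The variable names and the data file are hypothetical composite scores invented for illustration; this is not the authors' actual model specification or survey instrument.

```python
# Sketch of a mediation path model, assuming hypothetical composite scores:
#   anti_western       -> conspiracy beliefs
#   conspiracy         -> concern/knowledge about the virus
#   concern_knowledge  -> compliance with guidelines (plus a direct path from conspiracy)
import pandas as pd
from semopy import Model

desc = """
conspiracy ~ anti_western
concern_knowledge ~ conspiracy
compliance ~ concern_knowledge + conspiracy
"""

df = pd.read_csv("survey_wave1.csv")  # hypothetical data file with the four composites
model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```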
The authenticity of public debate is challenged by the emergence of networks of non-genuine users (such as political bots and trolls) employed and maintained by governments to influence public opinion. To tackle this issue, researchers have developed algorithms to automatically detect non-genuine users, but it is not clear how to identify relevant content, which features to use, and how often to retrain classifiers. Users of online discussion boards who informally flag other users by calling them out as paid trolls provide potential labels of perceived propaganda in real time. Against this background, we test the performance of supervised machine learning models (regularized regression and random forests) in predicting discussion board comments perceived as propaganda by users of a major Romanian online newspaper. Results show that precision and recall are relatively high and stable, and that retraining the model on new labels does not improve prediction diagnostics. Overall, metadata (particularly a low comment rating) are more predictive of perceived propaganda than textual features. The method can be extended to monitor suspicious activity in other online environments, but the results should not be interpreted as detecting actual propaganda.
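The classification setup described here can be sketched with scikit-learn, combining TF-IDF text features with comment metadata in a single random forest pipeline. The column names, data file, and hyperparameters below are hypothetical placeholders, not the authors' code or feature set.

```python
# Schematic of the supervised setup: predict comments flagged as "paid troll"
# from text plus metadata features. Columns and file are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("comments.csv")  # hypothetical: comment text, rating, flag label
X, y = df[["text", "rating"]], df["flagged_as_troll"]

features = ColumnTransformer([
    ("tfidf", TfidfVectorizer(max_features=5000), "text"),  # textual features
    ("meta", "passthrough", ["rating"]),                    # metadata (comment rating)
])
clf = Pipeline([
    ("features", features),
    ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print(precision_score(y_test, pred), recall_score(y_test, pred))
```

Keeping the metadata columns separate from the TF-IDF block makes it straightforward to compare their relative contributions, which is one way to probe the finding that a low comment rating is more predictive than the text itself.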