Social media (e.g., Twitter) has become an extremely popular tool for public health surveillance. The novel coronavirus disease 2019 (COVID-19) is the first pandemic experienced by a world connected through the internet. We analyzed more than 105 million tweets collected between March 1 and May 15, 2020, and Weibo messages compiled between January 20 and May 15, 2020, covering six languages (English, Spanish, Arabic, French, Italian, and Chinese) and representing an estimated 2.4 billion citizens worldwide. To examine fine-grained emotions during a pandemic, we built machine learning classification models on top of deep learning language models to identify emotions in social media conversations about COVID-19, including positive expressions (optimistic, thankful, and empathetic), negative expressions (pessimistic, anxious, sad, annoyed, and denial), and a complicated expression, joking, which has not been explored before. Our analysis shows a rapid increase and a slow decline in the volume of social media conversations about the pandemic in all six languages. The upsurge was triggered by a combination of economic collapse and confinement measures across the regions to which the six languages belong, except for Chinese, where only the latter drove conversations. Tweets in all analyzed languages conveyed remarkably similar emotional trajectories as the epidemic was elevated to pandemic status: feelings were dominated by a mixture of joking and anxious, pessimistic, and annoyed expressions as the volume of conversation surged, then shifted to a general increase in positive states (optimistic, thankful, and empathetic) as the pandemic came under control, with the strongest positive expressions appearing in Arabic tweets.
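As a rough illustration of the kind of classifier the abstract describes, the sketch below tags a tweet with the listed emotion categories using a pretrained multilingual transformer. The checkpoint, multi-label formulation, and decision threshold are illustrative assumptions, not the authors' exact setup, and the head would still need fine-tuning on labeled tweets.

```python
# Minimal sketch: multi-label emotion tagging of COVID-19 tweets.
# Checkpoint, label set handling, and threshold are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["optimistic", "thankful", "empathetic", "pessimistic",
          "anxious", "sad", "annoyed", "denial", "joking"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # a tweet may carry several emotions
)

def tag_emotions(texts, threshold=0.5):
    """Return, for each text, the emotions whose sigmoid score exceeds the threshold."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = torch.sigmoid(model(**batch).logits)
    return [[LABELS[j] for j, p in enumerate(row) if p >= threshold]
            for row in probs]

# Example call (scores are meaningless until the classification head is fine-tuned):
print(tag_emotions(["Stay strong, we will get through this together!"]))
```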
Are Federated Learning (FL) systems free from backdoor poisoning, given the arsenal of defense strategies now deployed? This is an intriguing question with significant practical implications for the utility of FL services. Despite the recent proliferation of poisoning-resilient FL methods, our study shows that carefully tuning the collusion among malicious participants can minimize the trigger-induced deviation of the poisoned local model from its poison-free counterpart, which plays the key role in delivering stealthy backdoor attacks and circumventing a wide spectrum of state-of-the-art defense methods in FL. We instantiate this attack strategy as a distributed backdoor attack method named Cerberus Poisoning (CerP). It jointly tunes the backdoor trigger and controls the poisoned model changes on each malicious participant to achieve a stealthy yet successful backdoor attack against a wide spectrum of defensive mechanisms for federated learning. Our extensive study on 3 large-scale benchmark datasets and 13 mainstream defensive mechanisms confirms that Cerberus Poisoning poses a severe threat to the integrity and security of federated learning practice, despite the recent proliferation of robust FL methods.
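To make the stealth idea concrete, the sketch below shows one way a malicious client could bound the trigger-induced deviation of its poisoned update so that it resembles a poison-free update. The helper names (local_train, apply_trigger), the flat-tensor parameter view, and the bound epsilon are assumptions for illustration; this is not the exact CerP procedure.

```python
# Minimal sketch of the stealth constraint: train on trigger-stamped data,
# then shrink the trigger-induced bias so magnitude- and similarity-based
# defenses see an update close to a benign one. Helper functions and the
# epsilon bound are hypothetical, not the published CerP algorithm.
import torch

def stealthy_poisoned_update(global_params, clean_data, trigger, target_label,
                             local_train, apply_trigger, epsilon=1.0):
    # 1. Update a benign client would produce from clean data only.
    benign = local_train(global_params, clean_data)
    benign_delta = benign - global_params

    # 2. Update produced after mixing in trigger-stamped, relabeled samples.
    poisoned_data = clean_data + apply_trigger(clean_data, trigger, target_label)
    poisoned = local_train(global_params, poisoned_data)
    poisoned_delta = poisoned - global_params

    # 3. Bound the trigger-induced bias so the submitted update stays close
    #    to the poison-free one while still carrying the backdoor signal.
    bias = poisoned_delta - benign_delta
    norm = bias.norm()
    if norm > epsilon:
        bias = bias * (epsilon / norm)
    return global_params + benign_delta + bias
```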