Given the huge impact that Online Social Networks (OSNs) have had on the way people get informed and form their opinions, they have become an attractive playground for malicious entities that want to spread misinformation and leverage its effects. Indeed, misinformation spreads easily on OSNs and is a serious threat to modern society, possibly influencing the outcome of elections or even putting people's lives at risk (e.g., the spread of "anti-vaccine" misinformation). It is therefore of paramount importance for society to have some form of "validation" of information spreading through OSNs, and such wide-scale validation would greatly benefit from automatic tools. In this paper, we show that it is difficult to carry out an automatic classification of misinformation considering only the structural properties of content propagation cascades. We focus on structural properties because they would be inherently difficult to manipulate with the aim of circumventing classification systems. To support our claim, we carry out an extensive evaluation on Facebook posts belonging to conspiracy theories (as representative of misinformation) and scientific news (as representative of fact-checked content). Our findings show that conspiracy content actually reverberates in a way that is hard to distinguish from the way scientific content does: for the classification mechanisms we investigated, the F1-score never exceeds 0.65 during the content propagation stages, and remains below 0.7 even after propagation is complete.
In this paper, we present findings from a large-scale and long-term phishing experiment that we conducted in collaboration with a partner company. Our experiment ran for 15 months, during which more than 14,000 study participants (employees of the company) received different simulated phishing emails in their normal working context. We also deployed a reporting button in the company's email client, which allowed the participants to report suspicious emails they received. We measured click rates for phishing emails, dangerous actions such as submitting credentials, and reported suspicious emails. The results of our experiment provide three types of contributions. First, some of our findings support previous literature with improved ecological validity; one example of such results is the good effectiveness of warnings on emails. Second, some of our results contradict prior literature and common industry practices. Surprisingly, we find that embedded training during simulated phishing exercises, as commonly deployed in industry today, does not make employees more resilient to phishing; instead, it can have unexpected side effects that make employees even more susceptible to phishing. Third, we report new findings. In particular, we are the first to demonstrate that using employees as a collective phishing detection mechanism is practical in large organizations. Our results show that such crowd-sourcing allows fast detection of new phishing campaigns, that the operational load for the organization is acceptable, and that employees remain active over long periods of time.
The Command and Control (C&C) channel of modern botnets is migrating from traditional centralized solutions (such as those based on Internet Relay Chat and the Hypertext Transfer Protocol) towards new decentralized approaches. For example, in order to conceal their traffic and evade blacklisting mechanisms, recent C&C channels use peer-to-peer networks or abuse popular Online Social Networks (OSNs). A key reason for this paradigm shift is that current detection systems have become quite effective at detecting centralized C&C. In this paper, we propose ELISA (Elusive Social Army), a botnet that conceals C&C information using the OSN accounts of unaware users. In particular, ELISA opportunistically exploits the messages that users exchange through the OSN. Furthermore, we provide a prototype implementation of ELISA. We show that several popular social networks can be maliciously exploited to run this type of botnet, and we discuss why current traffic analysis systems cannot detect ELISA. Finally, we run a thorough set of experiments that confirm the feasibility of our proposal. We have no evidence of any real-world botnets that use our technique to create C&C channels; however, we believe that identifying potential new types of botnets in advance will help prevent possible future malevolent applications.