The COVID-19 pandemic presents a significant challenge to wellbeing for people around the world. Here, we examine which individual and societal factors can predict the extent to which individuals suffer or thrive during the COVID-19 outbreak, with survey data collected from 26,684 participants in 51 countries from 17 April to 15 May 2020. We show that wellbeing is linked to an individual's recent experiences of specific momentary positive and negative emotions, including love, calm, determination, and loneliness. Higher socioeconomic status was associated with better wellbeing. The present study provides a rich map of emotional experiences and wellbeing around the world during the COVID-19 outbreak, and points to calm, connection, and control as central to our wellbeing at this time of collective crisis.
Online Social Media (OSM) in general, and the micro-blogging site Twitter in particular, has outpaced conventional news dissemination systems. News stories are often broken first on Twitter and only then taken up by electronic and print media. However, Twitter's distributed structure and lack of moderation, compounded with the temptation to post a newsworthy story early, make the veracity of information (tweets) a major issue. Our work attempts to solve this problem by providing an approach to detect misinformation/rumors on Twitter automatically and in real time. We define a rumor as any information circulating in the Twitter space that is not in agreement with information from a credible source. To establish credibility, our approach rests on the premise that verified News Channel accounts on Twitter furnish more credible information than unverified accounts of general users (the public at large). Our approach has four key steps. First, we extract live streaming tweets corresponding to Twitter trends, identify the topics being discussed in each trend by clustering on hashtags, and collect tweets for each topic. Second, we segregate the tweets for each topic according to whether the tweeter is a verified news channel or a general user. Third, we calculate and compare the contextual and sentiment mismatch between tweets on the same topic from verified News Channel accounts and from unverified (general) users, using semantic and sentiment analysis of the tweets. Last, we label the topic as a rumor based on the mismatch ratio, which reflects the degree of discrepancy between the news media and the public on that topic. Results show that a large number of topics can be flagged as suspicious using this approach without any manual inspection.
To validate our proposed algorithm, we implement a prototype called The Twitter Grapevine, which targets rumor detection in the Indian domain. The prototype shows how a user can leverage this implementation to monitor detected rumors through an activity timeline, maps, and a tweet feed. Users can also report a detected rumor as incorrect, and the label can then be updated after manual inspection.
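The third and fourth steps above can be sketched in code. This is a minimal illustration, not the paper's implementation: the bag-of-words cosine similarity, the tiny sentiment lexicon, and both thresholds are simplifying assumptions standing in for the unspecified semantic and sentiment analysis.

```python
from collections import Counter
import math

# Toy sentiment lexicon; purely illustrative, not the paper's resource.
POSITIVE = {"good", "great", "confirmed", "safe", "true"}
NEGATIVE = {"fake", "hoax", "dead", "false", "bad"}

def bow_cosine(a, b):
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def sentiment(text):
    """Signed count of positive minus negative lexicon words."""
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in text.lower().split())

def mismatch_ratio(news_tweets, public_tweets, sim_threshold=0.3):
    """Fraction of public tweets that disagree with the pooled news tweets,
    either contextually (low similarity) or in sentiment polarity."""
    news_text = " ".join(news_tweets)
    news_sent = sentiment(news_text)
    mismatched = 0
    for t in public_tweets:
        context_ok = bow_cosine(t, news_text) >= sim_threshold
        polarity_ok = sentiment(t) * news_sent >= 0  # same sign or neutral
        if not (context_ok and polarity_ok):
            mismatched += 1
    return mismatched / len(public_tweets) if public_tweets else 0.0

def is_rumor(news_tweets, public_tweets, ratio_threshold=0.5):
    """Flag the topic when the mismatch ratio exceeds a chosen threshold."""
    return mismatch_ratio(news_tweets, public_tweets) >= ratio_threshold
```

In practice the similarity and sentiment models would be far richer, but the control flow (pool verified tweets, score each public tweet against them, threshold the mismatch ratio) mirrors the steps described above.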
Abstract: YouTube draws a large number of users who contribute actively by uploading videos or commenting on existing videos. However, because the content is crowd-sourced and uploaded at large scale, there is limited control over it. This allows malicious users to push content (videos and comments) that is inappropriate (unsafe), particularly when such content is placed around cartoon videos that are typically watched by kids. In this paper, we focus on the presence of child-unsafe content and the users who promote it. To detect child-unsafe content and its promoters, we use two approaches: one based on supervised classification over an extensive set of video-level, user-level, and comment-level features, and another based on a Convolutional Neural Network applied to video frames. A detection accuracy of 85.7% is achieved, which can be leveraged to build a system providing a safe YouTube experience for kids. Through detailed characterization studies, we conclude that unsafe content promoters are less popular and engage less than other users. Finally, using a network of unsafe content promoters and other users built from their engagements (likes, subscriptions, and playlist additions) and other factors, we find that unsafe content sits very close to safe content and that unsafe content promoters form close-knit communities with other users, further increasing the likelihood of a child being exposed to unsafe content.
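The feature-based half of the approach above can be sketched as a feature extractor feeding a linear scorer. This is a hedged illustration only: the feature names, the keyword list, and the hand-set weights are hypothetical stand-ins, since the abstract does not enumerate the actual feature set or classifier.

```python
# Illustrative keyword list; the paper's actual signals are not listed here.
UNSAFE_KEYWORDS = {"prank", "gone wrong", "18+", "violent"}

def extract_features(video):
    """Map a video record (a dict with hypothetical keys) to a few
    video-level, user-level, and comment-level features."""
    title = video.get("title", "").lower()
    comments = [c.lower() for c in video.get("comments", [])]
    likes, dislikes = video.get("likes", 0), video.get("dislikes", 0)
    return {
        # video-level: keyword hits in the title, audience reaction
        "title_unsafe_kw": sum(kw in title for kw in UNSAFE_KEYWORDS),
        "dislike_ratio": dislikes / max(1, likes + dislikes),
        # user-level: uploader popularity
        "uploader_subscribers": video.get("subscribers", 0),
        # comment-level: how many comments contain unsafe keywords
        "comment_unsafe_kw": sum(
            any(kw in c for kw in UNSAFE_KEYWORDS) for c in comments
        ),
    }

def unsafe_score(video, weights):
    """Linear score over the extracted features (weights are learned in a
    real classifier; here they would be supplied by hand)."""
    feats = extract_features(video)
    return sum(weights.get(k, 0.0) * v for k, v in feats.items())

def is_unsafe(video, weights, threshold=1.0):
    """Binary decision by thresholding the linear score."""
    return unsafe_score(video, weights) >= threshold
```

A trained model (e.g. a random forest or the paper's CNN over video frames) would replace the hand-set weights, but the pipeline shape (per-level feature extraction, then classification) is the same.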