Depression is the most common mental illness in the US, with 6.7% of all adults experiencing a major depressive episode. Unfortunately, depression also affects adolescents and young adults, and researchers have observed rising rates in recent years (from 8.7% in 2005 to 11.3% in 2014 among adolescents, and from 8.8% to 9.6% among young adults), especially among girls and women. Sufferers themselves can be a barrier to fighting this disease, as they tend to hide their symptoms and do not seek treatment. However, protected by anonymity, they share their feelings on the Web, looking for help. In this paper, we address the problem of detecting depressed users in online forums. We analyze user behavior in the ReachOut.com online forum, a platform that provides a supportive environment for young people to discuss their everyday issues, including depression. We propose an unsupervised technique based on recurrent neural networks and anomaly detection to detect depressed users. We examine the linguistic style of user posts in combination with network-based features that model how users connect in the forum. Our results show that both psycho-linguistic features derived from user posts and network features are good predictors of users facing depression. Moreover, by combining these two sets of features, we achieve an F1-measure of 0.64 and outperform the baselines.
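The approach described above, minus the recurrent network, can be sketched as a generic unsupervised anomaly detector over per-user feature vectors. The following is a minimal illustration under stated assumptions, not the paper's pipeline: the psycho-linguistic features (e.g., first-person pronoun rate, negative-emotion word rate) and network features (e.g., degree in the reply graph) are hypothetical and the data is synthetic; an Isolation Forest stands in for the RNN-based detector.

```python
# Minimal sketch: flag anomalous forum users from combined per-user features.
# All feature names and values are synthetic placeholders for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns stand in for hypothetical features, e.g.:
# [pronoun_rate, neg_emotion_rate, reply_degree, posting_burstiness]
typical = rng.normal(loc=0.0, scale=1.0, size=(95, 4))   # 95 "typical" users
outliers = rng.normal(loc=5.0, scale=1.0, size=(5, 4))   # 5 extreme users
X = np.vstack([typical, outliers])

detector = IsolationForest(contamination=0.05, random_state=0)
labels = detector.fit_predict(X)          # -1 = anomalous, 1 = normal
n_flagged = int((labels == -1).sum())
print(n_flagged, "users flagged as anomalous")
```

In practice the feature matrix would come from the forum data itself, and the anomaly scores would only be a first-pass filter before human review.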
As news is increasingly spread through social media platforms, the problem of identifying misleading or false information (colloquially called "fake news") has come into sharp focus. Many factors may help users judge the accuracy of news articles, ranging from the text itself to meta-data such as the headline, an image, or the bias of the originating source. In this research, participants (n = 175) of various political ideological leanings categorized news articles as real or fake based on either article text or meta-data. We used a mixed-methods approach to investigate how various article elements (news title, image, source bias, and excerpt) affect users' accuracy in identifying real and fake news. We also compared human performance to automated detection based on the same article elements: the automated techniques were more accurate than our human sample, and in both cases the best performance came not from the article text itself but from certain elements of meta-data. Adding the source bias does not help humans, but it does help automated detectors. Open-ended responses suggested that the image in particular may be a salient element for humans detecting fake news.
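As an illustration of the automated side of this comparison, the sketch below trains a simple bag-of-words classifier separately on two article elements (title and body excerpt) and reports per-element accuracy. The toy articles, labels, and model choice are assumptions made for illustration only, not the study's corpus or detector.

```python
# Illustrative sketch: compare which article element a simple automated
# detector can use. Data is synthetic; labels 1 = fake, 0 = real.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    ("SHOCKING cure doctors hate", "A miracle pill cures everything overnight.", 1),
    ("You won't BELIEVE this trick", "Celebrities swear by this secret method.", 1),
    ("Senate passes budget bill", "The vote followed weeks of negotiation.", 0),
    ("Local school opens new library", "Funding came from a community grant.", 0),
]
titles = [a[0] for a in articles]
bodies = [a[1] for a in articles]
labels = [a[2] for a in articles]

accs = {}
for name, field in [("title", titles), ("body", bodies)]:
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(field, labels)
    accs[name] = model.score(field, labels)  # training accuracy on toy data
    print(name, accs[name])
```

A real evaluation would of course use held-out articles and a much larger labeled corpus; this only shows the mechanics of training one detector per article element.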
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.