BACKGROUND Depression is a serious personal and public mental health problem. Self-report is the main method used to determine whether a person has depression and to assess its severity. However, it is not easy to discover patients with depression, as they often feel ashamed to disclose or discuss their mental health conditions with others. Moreover, self-report is time-consuming and usually misses a certain number of cases. Therefore, automatically discovering patients with depression from other sources, such as social media, is attracting more and more attention. Social media, as one of the most important daily communication systems, connects a large number of people, including patients with depression, and provides one channel for discovering them. In this paper, we investigate deep learning methods for depression risk prediction on Chinese microblogs, which have the potential to discover patients with depression and to trace their mental health conditions. OBJECTIVE The aim of this study is to explore the potential of state-of-the-art deep learning methods for depression risk prediction on Chinese microblogs. METHODS Deep learning methods with pretrained language representation models, including BERT, RoBERTa, and XLNet, are investigated for depression risk prediction and compared with previous methods on a manually annotated benchmark dataset with depression risks at four levels from 0 to 3, where 0, 1, 2, and 3 denote no inclination, mild, moderate, and severe, respectively. The dataset is collected from Weibo. We also compare the deep learning methods with pretrained language representation models in two settings: 1) publicly released pretrained language representation models, and 2) language representation models further pretrained on a large-scale unlabeled dataset collected from Weibo. Precision, recall, and F1 score are used as our performance evaluation measures.
RESULTS Among the three deep learning methods, BERT achieves the best performance, with a micro-averaged F1 score of 0.856. RoBERTa achieves the best performance on depression risks at levels 1, 2, and 3, with a macro-averaged F1 score of 0.424, which is a new benchmark result on the dataset. The further pretrained language representation models bring improvements over the publicly released ones. CONCLUSIONS In this study, we use deep learning methods with pretrained language representation models to automatically predict depression risk for Chinese microblogs. The experimental results show that the deep learning methods perform better than previous methods and have greater potential to discover patients with depression and to trace their mental health conditions.
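The evaluation measures described above, micro-averaged F1 over all four risk levels and macro-averaged F1 over levels 1–3 only, can be sketched in plain Python. This is an illustrative implementation with hypothetical label lists, not the authors' evaluation code; for single-label classification, micro-F1 reduces to accuracy, as the sketch makes explicit.

```python
def f1_scores(y_true, y_pred, labels):
    """Per-class precision/recall/F1, micro-F1 over all predictions,
    and macro-F1 averaged over the given label subset."""
    per_class = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        per_class[c] = (prec, rec, f1)
    # For single-label classification, micro-averaged F1 equals accuracy:
    # every false positive for one class is a false negative for another.
    micro_f1 = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    # Macro-F1: unweighted mean of per-class F1 over the chosen labels.
    macro_f1 = sum(f1 for _, _, f1 in per_class.values()) / len(labels)
    return per_class, micro_f1, macro_f1

# Hypothetical gold and predicted risk levels (0 = no inclination, 1-3 = mild/moderate/severe).
y_true = [0, 0, 1, 2, 3, 0, 1, 2]
y_pred = [0, 0, 1, 2, 1, 0, 0, 2]
_, micro, _ = f1_scores(y_true, y_pred, labels=[0, 1, 2, 3])
# Macro-averaged F1 restricted to levels 1-3, as in the benchmark.
_, _, macro = f1_scores(y_true, y_pred, labels=[1, 2, 3])
```

Restricting the macro average to levels 1–3 focuses the measure on the clinically relevant minority classes, so the dominant level-0 class cannot inflate the score.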