The aims of this study were to design a mobile app that would record daily self-reported ratings on the Korean version of the Center for Epidemiologic Studies Depression Scale-Revised (K-CESD-R) in a "Yes" or "No" format, to develop two different algorithms for converting binary-format mobile K-CESD-R responses into scores on a 5-point response format, and to determine which algorithm was better suited to the newly developed app. Algorithm (A) was designed to improve on the scoring system of the original K-CESD-R scale, which relies on retrospective recall over a 2-week delay, and algorithm (B) was designed to further refine the scoring of the mobile K-CESD-R scale, which relies on prospective recall with a 24-hour delay, as applied with algorithm (A). To calculate total mobile K-CESD-R scores, each algorithm applied its own cut-off criteria, with different inter-point intervals on the 5-point scale, to the ratio of the number of times a user responded "Yes" to each item to the number of days on which the user reported daily depressive symptom ratings during the 2-week study period. Twenty participants were asked to complete a K-CESD-R Mobile assessment daily for 2 weeks and an original K-CESD-R assessment delivered to their e-mail accounts at the end of the 2-week study period. There was a significant difference between original and mobile algorithm (B) scores but not between original and mobile algorithm (A) scores. Of the 20 participants, 4 scored at or above the cut-off criterion (≥13) on either the original K-CESD-R (n = 4), the mobile K-CESD-R converted with algorithm (A) (n = 3), or the mobile K-CESD-R converted with algorithm (B) (n = 1). However, all participants were assessed as being below threshold for a diagnosis of a mental disorder during a clinician-administered diagnostic interview. Given that no participant met diagnostic criteria, the single above-threshold screen produced by algorithm (B) suggests fewer false positives; therefore, the K-CESD-R Mobile app using algorithm (B) may be a more promising candidate for a depression screening tool than the app using algorithm (A).
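As an illustration only (the abstract does not report the actual cut-off criteria), the ratio-based conversion described above might be sketched as follows. The function name, the placeholder cut-off values, and the 0-4 per-item scoring are assumptions for illustration; the real inter-point intervals differ between algorithms (A) and (B).

```python
def item_to_five_point(yes_count, days_reported, cutoffs=(0.1, 0.3, 0.5, 0.8)):
    """Map the proportion of 'Yes' daily ratings for a single item onto a
    0-4 score. The cut-off values here are illustrative placeholders, not
    the study's algorithm (A) or (B) intervals.
    """
    if days_reported == 0:
        return None  # no daily ratings were submitted for this item
    ratio = yes_count / days_reported
    score = 0
    for threshold in cutoffs:
        if ratio >= threshold:
            score += 1
    return score


def total_mobile_score(item_counts, days_reported):
    """Sum the converted per-item scores to get a total mobile K-CESD-R
    score, skipping items with no ratings. `item_counts` is a list of
    per-item 'Yes' counts over the 2-week period (hypothetical input).
    """
    scores = (item_to_five_point(c, days_reported) for c in item_counts)
    return sum(s for s in scores if s is not None)
```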
The purpose of this study was to investigate methods of estimating the reliability of school-level scores using generalizability theory and multilevel models. Two approaches, 'students within schools' and 'students within schools and subject areas,' were conceptualized and implemented in this study. Four methods resulting from the combination of these two approaches with generalizability theory and multilevel models were compared for both balanced and unbalanced data. The generalizability theory and multilevel models for the 'students within schools' approach produced the same variance components and reliability estimates for the balanced data, but not for the unbalanced data. The differing results from the two models can be explained by the fact that they employ different procedures to estimate the variance components that are used, in turn, to estimate reliability. Among the estimation methods investigated in this study, the generalizability theory model with the 'students nested within schools crossed with subject areas' design produced the lowest reliability estimates. The choice between fully nested designs such as (students:schools) and (subject areas:students:schools) had no significant impact on reliability estimates of school-level scores; both methods provided very similar estimates.
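As a point of reference rather than the study's own code, the generalizability (reliability) coefficient for school-level scores in a balanced students-within-schools (p:s) design can be sketched as below. The function name and the use of plain one-way random-effects ANOVA estimators for the variance components are assumptions for illustration; the unbalanced-data estimators compared in the study are not shown.

```python
import numpy as np

def school_reliability(scores):
    """Estimate school-level score reliability for a balanced
    students-within-schools (p:s) design from one-way random-effects
    ANOVA variance components (a generic sketch, not the paper's code).

    scores: 2-D array of shape (n_schools, n_students_per_school)
    """
    n_schools, n_students = scores.shape
    grand_mean = scores.mean()
    school_means = scores.mean(axis=1)

    # Mean squares for the one-way random-effects model
    ms_between = n_students * np.sum((school_means - grand_mean) ** 2) / (n_schools - 1)
    ms_within = np.sum((scores - school_means[:, None]) ** 2) / (n_schools * (n_students - 1))

    # Variance components: sigma^2(students:schools) and sigma^2(schools)
    var_within = ms_within
    var_school = max((ms_between - ms_within) / n_students, 0.0)

    # Generalizability coefficient for school means:
    # sigma^2(schools) / (sigma^2(schools) + sigma^2(students:schools) / n_students)
    return var_school / (var_school + var_within / n_students)
```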