The two-alternative multidimensional forced-choice measurement of personality has attracted researchers' attention for its robustness to response bias. Moreover, when personality measurement is conducted on computers, response times can be collected along with the item responses. In view of this situation, the objective of this study is to propose a Thurstonian D-diffusion item response theory (IRT) model, which combines two key existing frameworks: the Thurstonian IRT model for forced-choice measurement and the D-diffusion IRT model for response times in personality measurement. The proposed model reflects the psychological theories behind the data-generating mechanism of the item response and the response time. A simulation study reveals that the proposed model can successfully recover the parameters and factor structure in typical application settings. A real-data application reveals that the proposed model estimates similar but still distinct parameter values compared with the original Thurstonian IRT model, and this difference can be explained by the response time information. In addition, the proposed model successfully reflects the distance-difficulty relationship between the response time and the latent relative respondent position.

Keywords: Item response theory · Response time · Diffusion model · Thurstonian IRT

The Likert scale (Likert, 1932) is widely used to measure respondents' psychological characteristics. However, it is also known that data obtained from a Likert scale tend to be affected by response biases (for more details and examples,
On the basis of a combination of linear ballistic accumulation (LBA) and item response theory (IRT), this paper proposes a new class of item response models, namely LBA IRT, which incorporates the observed response time by means of LBA. Our main objective is to develop a simple yet effective alternative to the diffusion IRT model, one of the best-known response time (RT)-incorporating IRT models, which explicitly models the underlying psychological process of the elicited item response. Through a simulation study, we show that the proposed model yields parameter estimates comparable to those of the diffusion IRT model while achieving a much faster convergence speed. Furthermore, the application of the proposed model to real personality measurement data indicates that it fits the data better than the diffusion IRT model in terms of predictive performance. Thus, the proposed model exhibits good performance and promising modeling capabilities for capturing the cognitive and psychometric processes underlying the observed data.
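The diffusion IRT models discussed in the two abstracts above treat a response and its latency as the joint outcome of an evidence-accumulation process. A minimal sketch of that data-generating idea, using an Euler-Maruyama simulation of a single-boundary-pair Wiener process and a D-diffusion-style drift rate (drift taken as the ratio of trait level to item difficulty — the parameter names `theta` and `b` and the ratio form are illustrative assumptions, not the papers' exact specification):

```python
import random

def simulate_diffusion_trial(drift, boundary=1.0, dt=0.001, noise=1.0, rng=None):
    """Simulate one Wiener diffusion trial by Euler-Maruyama stepping.

    Returns (response, rt): response is 1 if the upper boundary is hit
    (e.g. endorsing the statement), 0 for the lower boundary; rt is the
    first-passage time in seconds (decision time only, no non-decision time).
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0  # start midway between the two boundaries
    while abs(x) < boundary:
        # dX = drift * dt + noise * dW
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= boundary else 0), t

# D-diffusion-style drift for personality items: drift = theta / b,
# so the process is slow (long RTs) when the respondent's trait level
# is close to the item's position -- the distance-difficulty relation.
rng = random.Random(1)
theta, b = 1.5, 1.0  # hypothetical person and item parameters
trials = [simulate_diffusion_trial(theta / b, rng=rng) for _ in range(200)]
p_yes = sum(r for r, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

With a positive drift, the simulated respondent endorses the item on most trials, and the mean first-passage time shrinks as the drift (i.e., the person-item distance ratio) grows — the qualitative pattern these models exploit.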
Background One of the reasons why students go to counseling is being called on based on self-reported health survey results. However, there is no concordant standard for such calls. Objective This study aims to develop a machine learning (ML) model to predict students' mental health problems in 1 year and the following year using the health survey's content and answering time (response time, response time stamp, and answer date). Methods Data were obtained from the responses of 3561 (62.58%) of 5690 undergraduate students from University A in Japan (a national university) who completed the health survey in 2020 and 2021. We performed 2 analyses: in analysis 1, a mental health problem in 2020 was predicted from demographics, answers to the health survey, and answering time in the same year; in analysis 2, a mental health problem in 2021 was predicted from the same input variables as in analysis 1. We compared the results from different ML models, such as logistic regression, elastic net, random forest, XGBoost, and LightGBM. The results with and without answering time conditions were compared using the adopted model. Results On the basis of the comparison of the models, we adopted the LightGBM model. In this model, both analyses and conditions achieved adequate performance (eg, in analysis 1 the Matthews correlation coefficient [MCC] was 0.970 with the answering-time condition and 0.976 without it; in analysis 2 the MCC was 0.986 with the answering-time condition and 0.971 without it). In both analyses and in both conditions, the responses to the questions about campus life (eg, anxiety and future) had the highest impact (Gain 0.131-0.216; Shapley additive explanations 0.018-0.028). Shapley additive explanation values of 5 to 6 input variables from the questions about campus life were among the top 10.
Contrary to our expectation, the inclusion of answering time–related variables did not substantially improve the prediction of students' mental health problems. However, certain variables derived from the answering time apparently helped improve the prediction and affected the prediction probabilities. Conclusions These results demonstrate the possibility of predicting mental health across years using health survey data. Demographic and behavioral data, including answering time, were effective, as were self-rating items. This model demonstrates the possibility of synergistically combining the characteristics of health surveys with the advantages of ML. These findings can inform improvements to health survey items and calling criteria.
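The abstract above compares models by the Matthews correlation coefficient, a metric well suited to imbalanced outcomes such as minority-class mental health problems. A minimal sketch of the metric from confusion-matrix counts (the example counts are illustrative only, not the study's data):

```python
from math import sqrt

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient from confusion-matrix counts.

    MCC = (tp*tn - fp*fn) / sqrt((tp+fp)(tp+fn)(tn+fp)(tn+fn)),
    ranging from -1 (total disagreement) to +1 (perfect prediction).
    Unlike accuracy, it stays informative under class imbalance.
    """
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical counts for a highly accurate classifier on an imbalanced
# sample; the resulting MCC is in the same range as the ~0.97 values
# reported in the abstract.
score = mcc(tp=98, fp=2, fn=1, tn=899)
```

Because the denominator multiplies all four marginal totals, a classifier that simply predicts the majority class collapses to an MCC near zero even when its raw accuracy is high, which is why the study's comparison across conditions rests on this coefficient rather than accuracy alone.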