In this paper, we propose a model-based method to study conditional dependence between response accuracy and response time (RT) with the diffusion IRT model. To this end, we extend the previously proposed model by introducing variability across persons and items in cognitive capacity and in the initial bias of the response processes. We show that the extended model can explain the behavioral patterns of conditional dependency found in previous psychometric studies. The first component, variability in cognitive capacity, can predict positive and negative conditional dependency and their interaction with item difficulty. The second component, variability in the initial bias, can account for early changes in response accuracy as a function of RT given the person and item effects, producing curvilinear conditional accuracy functions. We also provide a simulation study to validate the parameter recovery of the proposed model, and two empirical applications that describe how to implement the model to study the conditional dependency underlying response accuracy and RT data.
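As a rough sketch of this kind of extension (using a common additive diffusion IRT parameterization; the paper's exact link functions and notation may differ, and multiplicative links are also used in this literature), let person $p$ answering item $i$ trigger a Wiener process with drift rate $\nu_{pi}$, boundary separation $\alpha$, and starting point $z_{pi}$. The two variability components can then be written as person-by-item random effects, mirroring the drift variability ($\eta$) and starting-point range ($s_z$) of Ratcliff's full diffusion model:

$$\nu_{pi} = \theta_p - b_i + \varepsilon_{pi}, \qquad \varepsilon_{pi} \sim N(0, \eta^2),$$

$$z_{pi} \sim \mathrm{Uniform}\!\left(\tfrac{\alpha - s_z}{2},\; \tfrac{\alpha + s_z}{2}\right),$$

with accuracy (which boundary, $\alpha$ or $0$, is reached first) and RT jointly following the Wiener first-passage-time distribution after adding a non-decision time $T_{er}$. Marginalizing over $\varepsilon_{pi}$ makes slow responses disproportionately come from low-drift trials, producing an accuracy-RT dependence; marginalizing over $z_{pi}$ produces fast errors, which bend the conditional accuracy function at short RTs.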
Objective: The objective of this work is to obtain validity evidence for an evaluation instrument used to assess the performance level of a mastoidectomy. The instrument has been previously described and was formulated by a multi-institutional consortium.
Design: Mastoidectomies were performed on a virtual temporal bone system and then rated by experts using a previously described 15-element task-based checklist. Based on the results, a second, similar checklist was created and a second round of rating was performed.
Setting: Twelve otolaryngological surgical training programs in the United States.
Participants: 65 mastoidectomy performances from 37 individuals with a range of temporal bone dissection experience, from medical students to attending physicians, were evaluated. Raters were attending surgeons from 12 different institutions.
Results: Intraclass correlation (ICC) scores varied greatly between checklist items, with some low and some high. Percentage agreement scores were similar to those of previous rating instruments.
Conclusions: There is strong evidence that a high score on the task-based checklist is necessary for a rater to consider a mastoidectomy to be performed at the level of an expert, but a high score is not a sufficient condition. Rewording the instrument items to focus on safety did not increase the reliability of the instrument. The strong result of the Necessary Condition Analysis suggests that going beyond simple correlation measures can give extra insight into grading results. Additionally, we suggest using a multiple-point scale instead of a binary pass/fail question, combined with descriptive mastery levels.
We study intelligence processes using a diffusion IRT model with random variability in the cognitive model parameters: variability in drift rate (the trend of information accumulation toward a correct or incorrect response) and variability in starting point (where the information accumulation starts). The random variation operates across person-item pairs and cannot be accounted for by person or item differences. Interestingly, the models explain the conditional dependencies between response accuracy and response time found in previous studies on cognitive ability tests, leading us to formulate a randomness perspective on intelligence processes. As an empirical test, we analyzed verbal analogies data and matrix reasoning data using diffusion IRT models with different variability assumptions. The results indicate that 1) models with random variability fit better than models without, with implications for the conditional dependencies in both types of tasks; 2) for verbal analogies, random variation in drift rate seems to exist, which can be explained by person-by-item differences in word knowledge; and 3) for both types of tasks, starting point variation was also established, in line with the inductive nature of the tasks, which requires a sequential hypothesis testing process. Finally, the correlation of individual differences in drift rate with the speed-accuracy tradeoff (SAT) suggests a meta-strategic choice of respondents to focus on accuracy rather than speed when they have a higher cognitive capacity and when the task is one for which investing time pays off. This seems to be primarily the case for matrix reasoning and less so for verbal analogies.
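To make the mechanism concrete, here is a minimal, hypothetical simulation (not the authors' estimation code; all parameter values and function names are illustrative) of a Wiener diffusion process with trial-level drift and starting-point variability. It computes the conditional accuracy function, i.e., mean accuracy within RT quantile bins, which is the quantity the conditional dependencies above refer to:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_wiener(n_trials, drift_mean, drift_sd, boundary, start_range,
                    ndt=0.3, dt=0.001, sigma=1.0, max_t=5.0):
    """Simulate first-passage times of a Wiener diffusion process.

    Drift is drawn per trial from N(drift_mean, drift_sd), mimicking
    person-by-item capacity variability; the starting point is drawn
    uniformly around the midpoint, mimicking initial-bias variability.
    Returns RTs and accuracies (1 = upper/'correct' boundary reached).
    """
    v = rng.normal(drift_mean, drift_sd, n_trials)
    x = boundary / 2 + rng.uniform(-start_range / 2, start_range / 2, n_trials)
    rts = np.full(n_trials, np.nan)
    acc = np.full(n_trials, np.nan)
    alive = np.ones(n_trials, dtype=bool)
    t = 0.0
    while alive.any() and t < max_t:
        t += dt
        # Euler step of the diffusion for the still-running trials only
        x[alive] += v[alive] * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        up, lo = alive & (x >= boundary), alive & (x <= 0)
        rts[up | lo] = ndt + t
        acc[up], acc[lo] = 1.0, 0.0
        alive &= ~(up | lo)
    done = ~np.isnan(acc)          # drop the few censored trials
    return rts[done], acc[done]

def conditional_accuracy(rts, acc, n_bins=10):
    """Mean accuracy within RT quantile bins (conditional accuracy function)."""
    edges = np.quantile(rts, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(rts, edges[1:-1]), 0, n_bins - 1)
    return np.array([acc[bins == b].mean() for b in range(n_bins)])

# Drift variability alone: slow responses stem from low drifts, so the
# conditional accuracy function declines across RT bins.
print(np.round(conditional_accuracy(
    *simulate_wiener(20000, drift_mean=1.5, drift_sd=1.0,
                     boundary=2.0, start_range=0.0)), 3))

# Adding starting-point variability produces fast errors as well,
# lowering accuracy in the earliest bins (a curvilinear shape).
print(np.round(conditional_accuracy(
    *simulate_wiener(20000, drift_mean=1.5, drift_sd=1.0,
                     boundary=2.0, start_range=1.2)), 3))
```

With `drift_sd > 0` and no starting-point variability, the printed accuracies decline across RT bins (slow errors); setting `start_range > 0` additionally lowers accuracy in the earliest bins (fast errors), yielding the curvilinear conditional accuracy functions discussed above.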