BACKGROUND
Cognitive assessment is an important component of applied psychology, but limited accessibility and high costs make these evaluations difficult to obtain.
OBJECTIVE
This pilot study examined the feasibility of using large language models (LLMs) to create personalized AI-based verbal comprehension tests (AI-BVCTs) for assessing verbal intelligence, as compared with traditional assessment methods based on standardized norms.
METHODS
We used a within-subject design, comparing scores obtained from AI-BVCTs with those from the Verbal Comprehension Index (VCI) of the Wechsler Adult Intelligence Scale, Third Edition (WAIS-III).
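To make the comparison concrete, the sketch below (not the authors' analysis code) computes the Pearson correlation, Lin's concordance correlation coefficient, and a paired-samples t-test on hypothetical paired AI-BVCT and VCI scores; the variable names and values are illustrative assumptions only.

```python
# Illustrative sketch of a within-subject score comparison (hypothetical data).
import numpy as np
from scipy import stats

ai_bvct = np.array([102, 95, 118, 110, 99, 124, 107, 113])    # hypothetical AI-BVCT scores
wais_vci = np.array([100, 98, 121, 108, 103, 120, 105, 116])  # hypothetical WAIS-III VCI scores

# Pearson correlation between the two score sets
r, p_r = stats.pearsonr(ai_bvct, wais_vci)

# Lin's concordance correlation coefficient (population variances, ddof=0)
mx, my = ai_bvct.mean(), wais_vci.mean()
vx, vy = ai_bvct.var(), wais_vci.var()
sxy = np.mean((ai_bvct - mx) * (wais_vci - my))
ccc = 2 * sxy / (vx + vy + (mx - my) ** 2)

# Paired-samples t-test for a mean difference between the two measures
t, p_t = stats.ttest_rel(ai_bvct, wais_vci)

print(f"Pearson r = {r:.3f} (p = {p_r:.3f}), CCC = {ccc:.3f}, paired t p = {p_t:.3f}")
```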
RESULTS
The concordance correlation coefficient (CCC) demonstrated strong agreement between AI-BVCT and VCI scores (Claude: CCC = .752, 90% CI [.266, .933]; GPT-4: CCC = .733, 90% CI [.170, .935]). Pearson correlations further supported these findings, showing strong associations between VCI and AI-BVCT scores (Claude: r = .844, p < .001; GPT-4: r = .771, p = .025). No statistically significant differences were found between AI-BVCT and VCI scores (p > .05). These findings support the potential of LLMs to assess verbal intelligence.
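For reference, the CCC reported here is conventionally Lin's coefficient; assuming that standard formulation, agreement between AI-BVCT scores $x$ and VCI scores $y$ is

$$
\rho_c = \frac{2\rho\,\sigma_x\sigma_y}{\sigma_x^2 + \sigma_y^2 + (\mu_x - \mu_y)^2},
$$

where $\mu$ and $\sigma^2$ denote the means and variances of the two score sets and $\rho$ is their Pearson correlation; unlike $\rho$ alone, $\rho_c$ is reduced both by weak correlation and by systematic shifts between the two measures.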
CONCLUSIONS
The study attests to the promise of AI-based cognitive tests in increasing the accessibility and affordability of assessment processes, enabling personalized testing. The research also raises ethical concerns regarding privacy and over-reliance on AI in clinical work. Further research with larger and more diverse samples is needed to establish the validity and reliability of this approach and develop more accurate scoring procedures.