BACKGROUND
The Large Language Model and Artificial Intelligence Enabling and Self-Regulation tool (LLMAI-ESR 32) is an instrument designed specifically for medical and health sciences students.
OBJECTIVE
This study aimed to develop and psychometrically validate the LLMAI-ESR 32 to assess students’ ethical perspectives, usage behaviors, and understanding of AI-enabled large language models in medical education.
METHODS
The LLMAI-ESR 32 was developed through a systematic review and expert validation to ensure the accuracy and relevance of the included items. Psychometric testing was conducted to evaluate its reliability and validity.
RESULTS
The tool demonstrated high internal consistency (overall Cronbach α=0.934), indicating strong reliability. Significant associations among the assessed variables supported the tool’s ability to capture knowledge, usage, and ethical considerations, and associations between the use of different AI tools further supported its effectiveness in evaluating the self-regulation and enabling variables.
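For reference, Cronbach α for a k-item scale is computed from the item variances and the variance of the total score; the expression below is the standard definition (the item count of the LLMAI-ESR 32 is not stated in this abstract, so k is left symbolic):

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right)
\]

where \(k\) is the number of items, \(\sigma^2_{Y_i}\) is the variance of item \(i\), and \(\sigma^2_X\) is the variance of the total score. Values approaching 1 indicate that the items measure a common underlying construct.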
CONCLUSIONS
The LLMAI-ESR 32 is a valid, reliable, and robust psychometric measure, making it useful for evaluating the integration of large language models and AI in medical and applied health sciences education.
CLINICALTRIAL
NA