Context
In previous research, athletic trainers (ATs) have identified the formal coursework and clinical experiences of their professional program as contributing to preparedness for autonomous practice. However, new graduates have reported a perceived lack of skills necessary for practicing autonomously.
Objective
The purpose of this study was to evaluate how programs provide progressively autonomous clinical education experiences and the role of these experiences in preparing future ATs.
Design
Qualitative study.
Setting
Virtual interviews.
Patients or Other Participants
A sample of 17 program administrators (program directors = 12, coordinators of clinical education = 5) representing 16 master of science in athletic training programs participated in this study.
Main Outcome Measure(s)
Each participant completed a virtual interview guided by a semistructured interview protocol. A 3-person data analysis team identified emerging domains and categories through a multiphase approach. Member checking, multiple-researcher triangulation, and auditing were used to establish trustworthiness.
Results
Four domains emerged from participant responses: (1) curricular design, (2) preparatory experience and outcomes, (3) preceptor role, and (4) assessments. We found that intentionality in curricular design to promote progressive autonomy facilitated the transition to practice through improved knowledge, skills, and confidence. Given the importance of clinical education in AT preparation, preceptors played a critical role in creating opportunities for learning in the clinical environment. However, the degree to which experiences were autonomous and the students' effectiveness in those encounters were rarely measured. Limitations identified by participants included accreditation requirements and program length.
Conclusions
The development of knowledge, skills, and confidence through intentionally selected clinical experiences, guided by preceptor feedback, is critical to establishing an autonomous practitioner. However, programs' evaluations were predominantly traditional practice assessments and informal feedback rather than assessments of readiness for practice.