A competency-based approach to colonoscopy training is particularly important because the amount of practice required to reach proficiency varies widely between trainees. Although numerous objective proficiency assessment frameworks have been validated in the literature, these frameworks rely on expert observers, which is time-consuming; as a result, there has been growing interest in automated proficiency rating of colonoscopies. This work investigates sixteen automatically computed performance metrics and whether they can measure improvement in novices following a series of practice attempts. Motion-tracking parameters were calculated for three groups: untrained novices, the same novices after undergoing training exercises, and experts. All participants had electromagnetic tracking markers fixed to their hands and to the scope tip, and each performed eight testing sequences designed by an experienced clinician. Novices were then trained on 30 phantoms and re-tested. The tracking data were analyzed using sixteen metrics computed by the Perk Tutor extension for 3D Slicer. Statistical differences were assessed with a series of three t-tests, adjusted for multiple comparisons. All sixteen metrics differed significantly between untrained novices and experts, providing evidence of their validity as measures of performance: experts made fewer translational and rotational movements, followed a shorter and more efficient path, and completed the procedure faster. Pre- and post-training novices did not differ significantly in average velocity, motion smoothness, or path inefficiency.
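The analysis described above, three pairwise t-tests with a multiple-comparison adjustment, can be sketched as follows. This is a minimal illustration only: the metric values below are fabricated placeholders (the study's actual Perk Tutor metric data are not reproduced here), Welch's t-test and a Bonferroni correction are assumed as the specific test and adjustment, and group sizes are arbitrary.

```python
from scipy import stats

# Hypothetical values for one metric (e.g., total path length in mm)
# per group; these numbers are made up for illustration.
untrained = [5200, 4800, 5100, 4950, 5300]
trained = [4400, 4600, 4300, 4500, 4450]
experts = [3300, 3100, 3250, 3400, 3150]

# The three pairwise comparisons between the groups.
pairs = {
    "untrained vs experts": (untrained, experts),
    "untrained vs trained": (untrained, trained),
    "trained vs experts": (trained, experts),
}

n_comparisons = len(pairs)
for name, (a, b) in pairs.items():
    # Welch's t-test (does not assume equal variances).
    t, p = stats.ttest_ind(a, b, equal_var=False)
    # Bonferroni adjustment: multiply p by the number of comparisons.
    p_adj = min(p * n_comparisons, 1.0)
    print(f"{name}: t = {t:.2f}, adjusted p = {p_adj:.4f}")
```

In practice this loop would run once per metric (sixteen times), and the correction could instead span all metric-comparison pairs, depending on the family of hypotheses being controlled.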