BACKGROUND: To measure the effectiveness of an educational intervention, it is essential to develop high-quality, validated tools that assess a change in knowledge or skills after the intervention. An identified gap within the field of neurology is the lack of a universal test of knowledge of neurological assessment.
METHODS: This instrument development study was designed to determine whether neuroscience knowledge, as demonstrated on a Neurologic Assessment Test (NAT), was normally distributed across healthcare professionals who treat patients with neurologic illness. The variables of time, knowledge, accuracy, and confidence were individually explored and analyzed in SAS.
RESULTS: The mean (standard deviation) time the 135 participants spent completing the NAT was 12.9 (3.2) minutes. The mean knowledge score was 39.5 (18.2), mean accuracy was 46.0 (15.7), and mean confidence was 84.4 (24.4). Despite comparatively small standard deviations, Shapiro-Wilk tests indicated that time spent, knowledge, accuracy, and confidence were all non-normally distributed (P < .0001). The Cronbach α was 0.7816 when all 3 measures (knowledge, accuracy, and confidence) were considered; this improved to an α of 0.8943 when only knowledge and accuracy were included in the model. More time spent was positively associated with higher accuracy (r² = 0.04, P < .05), higher knowledge was positively associated with higher accuracy (r² = 0.6543, P < .0001), and higher knowledge was positively associated with higher confidence (r² = 0.4348, P < .0001).
CONCLUSION: The scores for knowledge, confidence, and accuracy each had a slightly skewed distribution around a point estimate, with a standard deviation smaller than the mean. This suggests initial content validity of the NAT. There is adequate initial construct validity to support using the NAT as an outcome measure for projects that measure change in knowledge. Although improvements can be made, the NAT has adequate construct and content validity for initial use.
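The study's analyses were run in SAS. As an illustration only, the same three checks (Shapiro-Wilk normality test, Cronbach's α for internal consistency, and r² from a Pearson correlation) can be sketched in Python. The scores below are simulated stand-ins, not the study's data, and the helper `cronbach_alpha` is a hypothetical implementation of the standard formula:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated scores (NOT the study's data): knowledge, accuracy, and
# confidence for 135 hypothetical participants, with accuracy and
# confidence constructed to correlate with knowledge.
n = 135
knowledge = rng.normal(39.5, 18.2, n)
accuracy = 0.7 * knowledge + rng.normal(0, 9, n)
confidence = 0.6 * knowledge + rng.normal(0, 13, n)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Normality check, analogous to the Shapiro-Wilk tests in the study.
w_stat, p_normality = stats.shapiro(knowledge)

# Internal consistency: all three measures vs. knowledge + accuracy only.
alpha3 = cronbach_alpha(np.column_stack([knowledge, accuracy, confidence]))
alpha2 = cronbach_alpha(np.column_stack([knowledge, accuracy]))

# Association between knowledge and accuracy, reported as r squared.
r, p_corr = stats.pearsonr(knowledge, accuracy)
r_squared = r ** 2
```

With real data, each variable would be tested for normality in turn, and a non-normal result (as the study found) would argue for nonparametric follow-up analyses.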