Background:
Multiple-choice test questions are among the main measures of knowledge used by educators in the postlicensure environment; however, these tests are often constructed in the absence of guidelines or the means to evaluate examination quality after administration.
Method:
Previously, guidance was provided on constructing quality test items. Here we instruct professional development specialists in using postadministration test data for item analysis, which provides insight into test flaws and opportunities for iterative examination improvement.
Results:
Item difficulty, the index of discrimination, and distractor analysis are covered for independent analysis, and topics such as reliability are addressed for readers with access to a formal analysis program.
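For readers who wish to compute these indices themselves, the sketch below illustrates the conventional formulas: difficulty as the proportion of examinees answering an item correctly, the index of discrimination as the difference in difficulty between upper- and lower-scoring groups, a simple distractor tally, and KR-20 reliability for dichotomous items. This is a minimal sketch under standard psychometric conventions; the 27% group split, the NumPy implementation, and all variable names are illustrative assumptions, not details drawn from the article.

```python
import numpy as np

def item_difficulty(scores: np.ndarray) -> np.ndarray:
    """Difficulty (p-value) per item: proportion answering correctly.
    `scores` is a 0/1 matrix, rows = examinees, columns = items."""
    return scores.mean(axis=0)

def discrimination_index(scores: np.ndarray, frac: float = 0.27) -> np.ndarray:
    """Index of discrimination D per item: difficulty in the top-scoring
    group minus difficulty in the bottom-scoring group (groups formed
    from total-score ranks; 27% is a common convention)."""
    order = np.argsort(scores.sum(axis=1))
    n = max(1, int(round(frac * scores.shape[0])))
    low, high = scores[order[:n]], scores[order[-n:]]
    return high.mean(axis=0) - low.mean(axis=0)

def distractor_table(choices: np.ndarray, item: int) -> dict:
    """Distractor analysis for one item: how often each option was chosen.
    `choices` holds the raw option letters, rows = examinees."""
    opts, counts = np.unique(choices[:, item], return_counts=True)
    return dict(zip(opts.tolist(), counts.tolist()))

def kr20(scores: np.ndarray) -> float:
    """Kuder-Richardson 20 reliability for dichotomously scored items."""
    k = scores.shape[1]
    p = scores.mean(axis=0)              # per-item difficulty
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - (p * (1.0 - p)).sum() / total_var)

if __name__ == "__main__":
    # Hypothetical data: 50 examinees, 10 items, ~70% correct at random.
    rng = np.random.default_rng(0)
    scores = (rng.random((50, 10)) < 0.7).astype(int)
    choices = rng.choice(list("ABCD"), size=(50, 10))
    print("difficulty:", np.round(item_difficulty(scores), 2))
    print("discrimination:", np.round(discrimination_index(scores), 2))
    print("item 1 distractors:", distractor_table(choices, 0))
    print("KR-20:", round(kr20(scores), 3))
```

As a rule of thumb, an item with moderate difficulty and a positive discrimination index is performing as intended, whereas a negative index flags an item that high scorers miss more often than low scorers and that warrants review.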
Conclusion:
Three levels of strategy are described: using a learning management system for item analysis, using free open-source software, and using a minimal-standards method of evaluating test items.
[J Contin Educ Nurs. 202x;5x(x):xx–xx.]