IRT-based adaptive grading tests are designed to assign examinees to one of several grading categories. The item search algorithm in these tests can be based on a golden section search, a Z-score criterion, or an EAP-based search; these methods result, respectively, in the golden section grading test (GGT), the Z-score grading test (ZGT), and the EAP grading test (EGT). Grade assignments are evaluated after each item is administered and after the current trait estimate (θ̂) has been determined. A test is terminated under one of three conditions: (1) θ̂ falls between two adjacent cutoff scores; (2) θ̂ falls above the highest or below the lowest cutoff score; or (3) a prespecified maximum number of items has been administered. Monte Carlo studies using actual ACT Mathematics test item parameters showed that all three strategies effectively assigned examinees to multiple achievement grade levels. EGT produced more correct classifications in the middle range of grade levels, but more classification errors for extreme true grade levels, than GGT and ZGT. However, when simulees were generated from a prior distribution with relatively larger variance, or when maximum likelihood estimation was used for the final θ estimates, the systematic classification errors in EGT disappeared. GGT and ZGT required fewer items than EGT to terminate the test.
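The following is a minimal sketch of the kind of termination check described above, not the authors' exact rule. It assumes one common operationalization from adaptive classification testing: the test stops when a confidence band around θ̂ (θ̂ ± z·SE) lies entirely within one grade category or beyond the extreme cutoffs, or when the item budget is exhausted. All names (check_termination, the z argument) and the bandwidth z = 1.96 are illustrative assumptions.

```python
from bisect import bisect_right

def check_termination(theta_hat, se, cutoffs, items_given, max_items, z=1.96):
    """Decide whether an adaptive grading test can stop and, if so,
    which grade category to assign.

    theta_hat : current trait estimate (e.g., EAP or ML)
    se        : standard error of theta_hat
    cutoffs   : sorted cutoff scores separating the grade categories
    items_given, max_items : items administered so far / maximum allowed
    z         : width of the confidence band around theta_hat (assumed)
    """
    lower, upper = theta_hat - z * se, theta_hat + z * se

    # Grade index = number of cutoffs below the estimate
    # (grade 0 lies below the lowest cutoff, grade len(cutoffs) above the highest).
    grade = bisect_right(cutoffs, theta_hat)

    # Conditions (1)/(2): the whole confidence band sits inside one grade
    # category, i.e., it does not straddle any cutoff score.
    if bisect_right(cutoffs, lower) == bisect_right(cutoffs, upper):
        return True, grade

    # Condition (3): item budget exhausted; assign the grade containing theta_hat.
    if items_given >= max_items:
        return True, grade

    return False, None
```

Under these assumptions, check_termination(0.4, 0.18, cutoffs=[-1.0, 0.0, 1.0], items_given=12, max_items=30) returns (True, 2), because the band 0.4 ± 1.96·0.18 stays between the cutoffs 0.0 and 1.0.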