In this article, we propose a simplified version of the maximum information per time unit method (MIT; Fan, Wang, Chang, & Douglas, Journal of Educational and Behavioral Statistics 37: 655-670, 2012), or MIT-S, for computerized adaptive testing. Unlike the original MIT method, the proposed MIT-S method does not require fitting a response time model to the individual-level response time data, and it is computationally efficient. The performance of the MIT-S method was compared against that of the maximum information (MI) method in terms of measurement precision, testing-time savings, and item pool usage under various item response theory (IRT) models. The results indicated that when the underlying IRT model is the two- or three-parameter logistic model, the MIT-S method maintains measurement precision and saves testing time. It performs similarly to the MI method in exposure control; both result in highly skewed item exposure distributions, owing to heavy reliance on the highly discriminating items. When the underlying model is the one-parameter logistic (1PL) model, the MIT-S method maintains measurement precision and saves a considerable amount of testing time; however, its heavy reliance on time-saving items leads to a highly skewed item exposure distribution. This weakness can be ameliorated by using randomesque exposure control, which successfully balances item pool usage. Overall, the MIT-S method with randomesque exposure control is recommended for achieving better testing efficiency while maintaining measurement precision and balanced item pool usage when the underlying IRT model is the 1PL.

Keywords: Maximum information per time unit · Response time · Computerized adaptive testing · Item exposure control · Test efficiency

Tailored testing, or adaptive testing, is known for its "efficiency" over traditional linear testing. The idea is to find the most suitable items from a large bank for each examinee.
In the simplest case of educational testing, highly capable test takers should not be asked many easy questions, and struggling test takers should not be presented with too many difficult questions. Delivering questions that are too easy may cause boredom, while delivering items that are too difficult may cause anxiety or other forms of construct-irrelevant metacognitive activity. In addition, responses to items that are too easy or too hard provide little information from the perspective of testing efficiency and are not helpful in quickly zeroing in on an examinee's ability.

Adaptive testing seeks to avoid these difficulties by delivering assessment tasks that are tailored to each examinee's ability. When the maximum information (MI) method is used for item selection (Weiss, 1982), adaptive testing can achieve the same level of measurement precision with as few as half the number of items required of linear tests. Following this approach, the ability estimate of an examinee is updated every time he or she responds to a question (Lord, 1980). The MI method then identifies the most informative item in the ...
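To make the MI selection rule concrete, the following is a minimal sketch (not the authors' implementation) of Fisher-information-based item selection under the two-parameter logistic (2PL) model, where an item's information at ability θ is a²P(θ)(1 − P(θ)). The item pool, parameter values, and function names here are illustrative assumptions.

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_max_info_item(theta_hat, pool, administered):
    """Return the index of the not-yet-administered item in the pool with
    maximum information at the current ability estimate theta_hat."""
    best, best_info = None, -1.0
    for idx, (a, b) in enumerate(pool):
        if idx in administered:
            continue
        info = item_information(theta_hat, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best

# Hypothetical pool of (discrimination a, difficulty b) pairs.
pool = [(0.8, -1.0), (1.5, 0.0), (1.2, 1.0), (2.0, 0.2)]

# At theta = 0, the MI rule favors the highly discriminating item near theta.
next_item = select_max_info_item(0.0, pool, administered=set())
```

The sketch also illustrates why MI selection tends to over-expose highly discriminating items, as noted in the abstract: items with large a dominate the information criterion whenever their difficulty is near the current ability estimate.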