Many innovative item formats have been proposed over the past decade, but little empirical research has been conducted on their measurement properties. This study examines the reliability, efficiency, and construct validity of two innovative item formats, the figural response (FR) format and the constructed response (CR) format, used in a K-12 computerized science test. The item response theory (IRT) information function and confirmatory factor analysis (CFA) were employed to address the research questions. It was found that the FR items were similar to the multiple-choice (MC) items in the amount of information provided and in efficiency, whereas the CR items provided noticeably more information than the MC items but tended to provide less information per minute. The CFA suggested that the innovative formats and the MC format measure similar constructs. Innovations in computerized item formats are reviewed, and the merits as well as the challenges of implementing the innovative formats are discussed.

Recent years have seen rapid growth in the use of computers in educational and licensure/certification testing. Many tests are now administered on computer, and software has been developed to facilitate all aspects of test development. On paper-and-pencil tests, multiple-choice (MC) items are the dominant testing format (Scalise & Gifford, 2008). Now armed with new technologies, test users and developers have shown increased interest in employing innovative item formats.

Many innovative item formats have emerged over the past decade, but little empirical research has been conducted on their measurement properties. The purpose of this study is to empirically examine the reliability, efficiency, and construct validity of two innovative item formats.
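For readers unfamiliar with the information-function comparisons invoked above, a minimal sketch may help; it assumes the two-parameter logistic (2PL) model, since the specific IRT model is not stated in this excerpt. The probability of a correct response and the item information function for item $i$ take the standard forms

$$
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}, \qquad
I_i(\theta) = a_i^2 \, P_i(\theta)\bigl[1 - P_i(\theta)\bigr],
$$

where $\theta$ is the examinee ability and $a_i$ and $b_i$ are the item's discrimination and difficulty parameters. Test information is additive across items, $I(\theta) = \sum_i I_i(\theta)$, so formats can be compared on the information their items contribute; efficiency in the sense used above can be expressed as information per unit of testing time, e.g., $I_i(\theta)/t_i$ for an item with average response time $t_i$.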