Multiple-choice (MC) items are widely used in educational tests. Distractor analysis, an important procedure for checking the utility of the response options within an MC item, can be readily implemented in the framework of item response theory (IRT). Although random guessing is a common behavior of test-takers when answering MC items, none of the existing IRT models for distractor analysis have considered the influence of random guessing in this process. In this article, we propose a new IRT model to distinguish the influence of random guessing from response option functioning. A brief simulation study was conducted to examine the parameter recovery of the proposed model. To demonstrate its effectiveness, the new model was applied to the mathematics tests of the Hong Kong Diploma of Secondary Education Examination (HKDSE) from 2015 to 2019. The results of the empirical analyses suggest that the complexity of item content is a key factor in inducing students' random guessing. The implications and applications of the new model to other testing situations are also discussed.

Multiple-choice (MC) questions are undoubtedly the most popular item format in academic assessments. An MC item is composed of a stem and several (usually 2–5) response options (Lord, 1977), from which test-takers are expected to select the single correct option, or the best option, among those given. The incorrect options are also referred to as "distractors." The design of distractors is directly related to the difficulty of an MC item. A good MC item should clearly distinguish the correct option from its distractors (Fellenz, 2004). Moreover, distractors should be sufficiently attractive. A distractor that fails to attract test-takers should be revised or dropped (Kehoe, 1995), because an MC item without effective distractors would be less discriminating (Haladyna & Downing, 1993). Thus, distractor analysis is necessary to ensure the measurement quality of MC items.

Many methods can be used to detect implausible distractors (Gierl et al., 2017). An intuitive strategy is to observe the frequency with which each option is chosen by test-takers. Specifically, practitioners can easily flag nonfunctioning distractors that are selected infrequently (e.g., by fewer than 5% of test-takers) in a test (Rao et al., 2016; Tarrant et al., 2009). Additionally, to examine the discrimination of distractors, researchers have also calculated the mean score of test-takers who choose a distractor (Haladyna & Rodriguez, 2013) or the point-biserial correlation for a distractor (Attali & Fraenkel, 2000). Alternatively,