Item nonresponses are prevalent in standardized testing. They occur either when students fail to reach the end of a test because of a time limit or quitting, or when students strategically choose to omit certain items. Item nonresponses are often nonrandom, and hence the missing data mechanism needs to be properly modeled. In this paper, we propose an innovative item response time model that serves as a cohesive missing data model accounting for the two most common types of item nonresponse: not-reached items and omitted items. In particular, the new model builds on a behavioral process interpretation: a person chooses to skip an item if the required effort exceeds the implicit time the person allocates to the item (Lee & Ying, 2015; Wolf, Smith, & Birnbaum, 1995), whereas a person fails to reach the end of the test because of a lack of time. This assumption was verified by analyzing the 2015 PISA computer-based mathematics data. Simulation studies were conducted to further evaluate the performance of the proposed Bayesian estimation algorithm for the new model and to compare the new model with a recently proposed "speed-accuracy + omission" model (Ulitzsch, von Davier, & Pohl, 2019). Results showed that all model parameters could be recovered properly and that inadequately accounting for the missing data mechanism led to biased item and person parameter estimates.
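To make the behavior-process interpretation concrete, the sketch below simulates the two missingness mechanisms described above: an item is omitted when its required time exceeds the time the person implicitly allocates to it, and remaining items are not reached once the cumulative time spent exhausts the test time limit. This is a minimal illustration, not the authors' actual model or estimation algorithm; all names (`required_time`, `allocated_time`, `TIME_LIMIT`) and the lognormal response-time distribution are hypothetical assumptions chosen for the example.

```python
import numpy as np

# Minimal sketch of the assumed behavior process (illustrative only).
rng = np.random.default_rng(seed=1)
n_persons, n_items = 5, 20
TIME_LIMIT = 60.0  # total testing time in minutes (hypothetical)

# Hypothetical required and implicitly allocated times per person-item pair.
required_time = rng.lognormal(mean=0.8, sigma=0.4, size=(n_persons, n_items))
allocated_time = rng.lognormal(mean=0.9, sigma=0.3, size=(n_persons, n_items))

# Response status codes: 1 = answered, 0 = omitted, -1 = not reached.
status = np.full((n_persons, n_items), -1, dtype=int)

for p in range(n_persons):
    elapsed = 0.0
    for j in range(n_items):
        if required_time[p, j] > allocated_time[p, j]:
            # Omission: required effort exceeds the implicitly allocated time.
            status[p, j] = 0
            continue  # skipping is assumed to take negligible time here
        if elapsed + required_time[p, j] > TIME_LIMIT:
            # Not reached: the person runs out of time; this and all
            # subsequent items keep the initial code -1.
            break
        elapsed += required_time[p, j]
        status[p, j] = 1

print(status)
```

Under these assumptions, omissions appear scattered throughout the test (driven by item-level time demands), whereas not-reached items cluster at the end, mirroring the distinction the proposed model is designed to capture.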