Abstract. Explicit model checking with magnetic disk is prohibitively slow if file input/output (IO) is not carefully managed. We give an empirical analysis of the two published algorithms for model checking with magnetic disk and show that, while both algorithms minimize file IO time, their execution time is dominated by delayed duplicate detection (which is required to avoid regenerating parts of the transition graph). We present and analyze a more time-efficient algorithm for model checking with magnetic disk that requires more file IO time but less delayed duplicate detection time and less total execution time. The new algorithm is a variant of parallel partitioned hash table algorithms and uses a time-efficient chained hash table implementation.

Model checking with magnetic disk can significantly increase the space available for storing visited states. In explicit model checking, visited states are stored to avoid generating duplicate states and to aid in termination detection. In this paper, we analyze the performance of the two published model checking algorithms for use with magnetic disk and find that, while file IO is an overhead in algorithms that use disk, delayed duplicate detection is the single largest overhead. We propose a new algorithm for explicit model checking with magnetic disk that requires more file IO time but reduces duplicate detection time and total execution time. The new algorithm solves large model checking problems in less time than other disk-based algorithms and solves small problems in 15% to 27% of the time required by the RAM-only algorithm.

Delayed duplicate detection is an extra processing step added to search algorithms that use magnetic disk to store visited states. The delayed duplicate detection step compares recently generated states with a set of visited states to determine whether the recently generated states are duplicates of already visited states. The set of already visited states is called the visited candidate set, and the set of new states is called the duplicate candidate set. Each state in the duplicate candidate set may have a different visited candidate set. During delayed duplicate detection, each state in the duplicate candidate set is compared with the states in its visited candidate set. The cost of delayed duplicate detection is therefore proportional to the product of the size of the duplicate candidate set and the average size of the visited candidate sets. Reducing the size of either candidate set reduces the cost of delayed duplicate detection.
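To make this cost structure concrete, the sketch below shows one plausible form of delayed duplicate detection, assuming visited states are kept on disk in hash partitions so that a new state's visited candidate set is the partition selected by its hash. The partition count, file layout, and helper names (partition_of, load_partition, save_partition, delayed_duplicate_detection) are illustrative assumptions, not the data structures of the algorithms analyzed in this paper.

```python
# Illustrative sketch of delayed duplicate detection (assumed structure, not
# the algorithms analyzed in this paper). Visited states are assumed to live
# on disk in hash partitions; a new state's visited candidate set is the
# partition its hash selects.

import hashlib
import pickle
from pathlib import Path

NUM_PARTITIONS = 64
DISK_DIR = Path("visited_partitions")    # hypothetical on-disk layout


def partition_of(state: bytes) -> int:
    """Map a state to the partition holding its visited candidate set."""
    return int.from_bytes(hashlib.md5(state).digest()[:4], "big") % NUM_PARTITIONS


def load_partition(p: int) -> set:
    """Read one partition of visited states from disk (empty if absent)."""
    path = DISK_DIR / f"part{p:03d}.pkl"
    return pickle.loads(path.read_bytes()) if path.exists() else set()


def save_partition(p: int, states: set) -> None:
    """Write an updated partition back to disk."""
    DISK_DIR.mkdir(exist_ok=True)
    (DISK_DIR / f"part{p:03d}.pkl").write_bytes(pickle.dumps(states))


def delayed_duplicate_detection(duplicate_candidates: list) -> list:
    """Return the candidates that are genuinely new and record them as visited.

    The dominant cost here is reading and writing the touched partitions:
    roughly the number of partitions touched times the average partition
    (visited candidate set) size.
    """
    # Group candidates by partition so each partition is read from disk once.
    by_partition = {}
    for s in duplicate_candidates:
        by_partition.setdefault(partition_of(s), []).append(s)

    new_states = []
    for p, candidates in by_partition.items():
        visited = load_partition(p)      # this group's visited candidate set
        for s in candidates:
            if s not in visited:         # compare against visited candidates
                visited.add(s)
                new_states.append(s)
        save_partition(p, visited)
    return new_states
```

Grouping candidates by partition lets each visited candidate set be read from disk at most once per pass, which is the usual motivation for delaying duplicate detection rather than probing disk every time a state is generated.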