Earthquake nowcasting has been proposed as a means of tracking changes in large-earthquake potential in a seismically active area. The method was developed using observable seismic data, from which probabilities of future large earthquakes can be computed using Receiver Operating Characteristic (ROC) methods. Furthermore, analysis of the Shannon information content of earthquake catalogs has shown that the catalogs contain information, and that this information can vary in time. An important question therefore remains: where does the information originate? In this paper, we examine this question using stochastic simulations of earthquake catalogs. Our simulated catalogs are computed with an Earthquake Rescaled Aftershock Seismicity ("ERAS") stochastic model. This model is similar in many ways to other stochastic seismicity simulations, but has the advantage that only two free parameters must be set: one for the aftershock (Omori-Utsu) time decay, and one for the aftershock spatial migration away from the epicenter. Generating a simulation catalog and fitting the two parameters to an observed catalog, such as that of California, takes only a few minutes of wall-clock time. While clustering can arise from random (Poisson) statistics, we show that significant information in the simulated catalogs arises from the non-Poisson, power-law aftershock clustering, implying that the practice of de-clustering observed catalogs may remove information that would otherwise be useful in forecasting and nowcasting. We also show that the nowcasting method yields similar results with the ERAS model as it does with observed seismicity.
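To illustrate the power-law aftershock clustering that the abstract contrasts with Poisson behavior, the sketch below draws aftershock times from a truncated Omori-Utsu decay rate, proportional to (t + c)^(-p), by inverse-transform sampling. This is a minimal, hedged illustration only: the parameter values (c, p, the one-year truncation) and the function name are assumptions for the example, not the ERAS model's actual fitted values or implementation.

```python
import random

def sample_omori_times(n, c=0.01, p=1.2, t_max=365.0, seed=42):
    """Draw n aftershock times (days after mainshock) from a truncated
    Omori-Utsu power-law rate ~ (t + c)^(-p) on [0, t_max], using
    inverse-transform sampling (assumes p != 1).

    Illustrative parameter values only; not the fitted ERAS parameters.
    """
    rng = random.Random(seed)
    # Antiderivative of (t + c)^(-p) is (t + c)^(1-p) / (1-p); the CDF on
    # [0, t_max] is F(t) = ((t+c)^(1-p) - c^(1-p)) / ((t_max+c)^(1-p) - c^(1-p)).
    a = c ** (1.0 - p)
    b = (t_max + c) ** (1.0 - p)
    times = []
    for _ in range(n):
        u = rng.random()
        # Invert F: t = (a + u * (b - a))^(1 / (1-p)) - c
        times.append((a + u * (b - a)) ** (1.0 / (1.0 - p)) - c)
    return sorted(times)

# Most sampled events cluster shortly after the mainshock, unlike a
# uniform (homogeneous Poisson) distribution of times over the year.
times = sample_omori_times(1000)
```

Because the rate decays as a power law, the bulk of the sampled times fall within the first few days, which is the non-Poisson temporal clustering that de-clustering procedures remove.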