Atmospheric chemistry models, used as components in simulations of air pollution and climate change, are computationally expensive. Previous studies have shown that machine-learned atmospheric chemistry solvers can be orders of magnitude faster than traditional integration methods but tend to suffer from numerical instability. Here, we present a modeling framework that reduces error accumulation compared to previous work while maintaining computational efficiency. Our approach is novel in that it: 1) uses a recurrent training regime that enables extended (>1 week) simulations without runaway error accumulation, and 2) can reversibly compress the number of modeled chemical species by >80% without further loss of accuracy. We observe a ~260× reduction in computation time (~1900× when run on specialized hardware) compared to the traditional solver. We use random initial conditions during training to promote general applicability across a wide range of atmospheric conditions. For ozone (with initial concentrations of 0–70 ppb), our model predictions over a 24-hour simulation period match those of the traditional solver with a median error of 2.7 ppb and less than 19 ppb error across 99% of simulations initialized with random noise. Error can be substantially higher in the remaining 1% of simulations, which include some of the most extreme concentration fluctuations simulated by the reference model. Results are similar for total particulate matter (median error of 16 µg/m³ and <32 µg/m³ error across 99% of simulations, with concentrations ranging from 0–150 µg/m³). Finally, we discuss the practical implications of our choice of modeling framework and next steps for improving performance. The machine learning models described here are not yet suitable replacements for traditional chemistry solvers but represent a step toward that goal.
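To make the two methodological claims above concrete, the following is a minimal, hypothetical sketch of a recurrent-rollout training loop with an autoencoder-style compression of the chemical state. It is illustrative only: the network sizes, variable names (n_species, latent_dim, rollout_steps), and the stand-in reference_step function are assumptions for demonstration, not the architecture, hyperparameters, or solver used in this work.

```python
# Hypothetical sketch: recurrent rollout training of a surrogate solver that
# advances a compressed (latent) chemical state, assuming a PyTorch setup.
import torch
import torch.nn as nn

n_species, latent_dim, rollout_steps = 100, 16, 8  # e.g. >80% compression: 100 -> 16

encoder = nn.Sequential(nn.Linear(n_species, 64), nn.ReLU(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_species))
stepper = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()) + list(stepper.parameters()),
    lr=1e-3,
)

def reference_step(c):
    # Placeholder for one step of the traditional chemistry solver
    # (a damped perturbation here, only so the example runs end to end).
    return 0.99 * c + 0.01 * torch.tanh(c)

for epoch in range(5):
    # Random initial conditions, as described in the abstract, to cover a wide
    # range of atmospheric regimes.
    c = torch.rand(32, n_species)          # batch of initial concentrations
    target = c
    z = encoder(c)
    loss = torch.tensor(0.0)
    for _ in range(rollout_steps):
        z = stepper(z)                     # advance in compressed latent space
        target = reference_step(target)    # reference trajectory from the solver
        # Penalize error at every step of the rollout, not just the final state,
        # which is what discourages runaway error accumulation.
        loss = loss + nn.functional.mse_loss(decoder(z), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key design idea sketched here is that gradients flow through the entire multi-step rollout, so the model is trained on its own accumulated errors rather than on single-step transitions alone.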