Simulation, and now digital twins, excel in estimating parameters associated with complex and time-dependent stochastic processes, e.g., large airport, highway, and warehouse operations. Within such contexts, we consider statistical inference, whereby one seeks to quantify the error in the obtained simulation estimate. Historically, statistical inference in simulation has been considered challenging because the parameters needing estimation are often complicated, and simulation output is often autocorrelated and non-normal. However, we argue that the remarkably simple idea of batching can be used as an "omnibus" inference device in simulation settings. Batching for inference works in three simple steps: (i) divide the simulation output data into overlapping batches; (ii) construct a parameter estimate from each batch; and (iii) use the batch estimates, after accounting for their dependence, to perform statistical inference. As we describe in this paper, the resulting procedures are usually trivial to implement in software, and they are provably correct and efficient. Batching ideas originated in the 1950s and have enjoyed steady development in the simulation community since the 1970s, mostly within the problem of variance estimation. Our thesis is that batching ideas have much wider utility, and that batching should be considered alongside other resampling ideas such as the bootstrap and the jackknife in modern statistics.
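To make the three steps concrete, the following is a minimal sketch, not the paper's own code, for the simplest setting: a confidence interval for a steady-state mean built from overlapping batch means of autocorrelated output. The function name `obm_confidence_interval`, the variance scaling, and the degrees-of-freedom approximation `1.5 * (n / b - 1)` are illustrative choices assumed here, not prescriptions from this paper.

```python
import numpy as np
from scipy import stats

def obm_confidence_interval(x, batch_size, alpha=0.05):
    """Overlapping-batch-means confidence interval for a steady-state mean.

    Illustrative sketch of the three batching steps; the scaling constant and
    the degrees-of-freedom approximation are common choices, not unique ones.
    """
    x = np.asarray(x, dtype=float)
    n, b = len(x), batch_size
    grand_mean = x.mean()

    # (i) form all overlapping batches of length b, and
    # (ii) compute the estimate (here, the sample mean) on each batch
    batch_means = np.convolve(x, np.ones(b) / b, mode="valid")  # n - b + 1 values

    # (iii) combine the batch estimates, with scaling and degrees of freedom
    # chosen to account for the dependence among overlapping batches
    var_obm = (n * b / ((n - b + 1) * (n - b))) * np.sum((batch_means - grand_mean) ** 2)
    dof = 1.5 * (n / b - 1)  # an approximate degrees-of-freedom choice (assumption)
    half_width = stats.t.ppf(1 - alpha / 2, dof) * np.sqrt(var_obm / n)
    return grand_mean - half_width, grand_mean + half_width

# Usage: an AR(1) sequence mimicking autocorrelated simulation output
rng = np.random.default_rng(0)
y = np.zeros(10_000)
for t in range(1, len(y)):
    y[t] = 0.8 * y[t - 1] + rng.normal()
print(obm_confidence_interval(y, batch_size=200))
```

The same recipe applies with other batch-level estimators in place of the sample mean; only step (ii) changes, while steps (i) and (iii) are reused unchanged.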