I contribute to stochastic modeling methodology through a theoretical framework spanning the core decisions in a model's lifetime: predicting an out-of-sample unit's latent state, even from non-series data; deciding when to start and stop learning about the state variable; and choosing among models by weighing their key trade-offs. States evolve under linear dynamics with time-varying predictors and coefficients (the drift) and generalized continuous noise (the diffusion). The coefficients must account for misprediction costs, data complexity, and distributional uncertainty (ambiguity) about both the state's diffusion and its stopping time. I solve exactly a stochastic dynamic program that is robust to worst-case costs under both uncertainties. The Bellman-optimal coefficients extend generalized ridge regression with out-of-sample components that capture how the value changes as the state changes. Performance issues trigger a sequential analysis of whether learning an alternative model, given the effort required, is better than keeping the baseline. The learning procedure is method-general and stops in the fewest attempts on average, subject to bounds on the decision errors. I derive preference functions for comparing models under state and cost-change constraints, joint-in-time distributions of the state and value, and other properties useful to modelers.
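The linear drift-diffusion state dynamics can be sketched numerically. The following is a minimal illustration, not the framework's actual model: it assumes a scalar state whose drift is the inner product of time-varying coefficients and predictors, replaces the generalized continuous noise with simple Brownian noise, and discretizes by Euler-Maruyama. All function and variable names are my own.

```python
import numpy as np

def simulate_state(x, beta, sigma, s0=0.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of a linear drift-diffusion state:
        dS_t = (beta_t . x_t) dt + sigma dW_t
    x: (T, p) time-varying predictors; beta: (T, p) time-varying coefficients.
    The 'generalized' diffusion in the text is richer than the Brownian
    noise used here; this is an illustrative simplification.
    """
    rng = np.random.default_rng(seed)
    T = x.shape[0]
    s = np.empty(T + 1)
    s[0] = s0
    for t in range(T):
        drift = beta[t] @ x[t]                       # time-varying linear drift
        s[t + 1] = s[t] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
    return s

# Deterministic check: constant predictors and coefficients, zero noise,
# so the state just integrates a drift of 2.0 over 100 steps of dt=0.01.
x = np.ones((100, 2))
beta = np.ones((100, 2))
path = simulate_state(x, beta, sigma=0.0)
```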
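To make the generalized-ridge connection concrete, here is the standard generalized ridge estimator, beta = (X'X + lam*Omega)^{-1} X'y. The Bellman-optimal coefficients described above would add out-of-sample value-change components that this sketch omits; the names and example data are illustrative, not taken from the work.

```python
import numpy as np

def generalized_ridge(X, y, Omega, lam):
    """Generalized ridge regression: solve (X'X + lam*Omega) beta = X'y.

    Omega is a positive semi-definite penalty matrix (Omega = I recovers
    ordinary ridge). The framework's coefficients would extend this system
    with out-of-sample terms tied to value changes under state changes.
    """
    return np.linalg.solve(X.T @ X + lam * Omega, X.T @ y)

# Illustrative fit on synthetic data: the estimate should land near the
# true coefficients since the noise is small and the penalty is mild.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=50)
beta_hat = generalized_ridge(X, y, Omega=np.eye(3), lam=0.1)
```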
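The stopping rule that uses the fewest attempts on average subject to decision-error bounds echoes Wald's sequential probability ratio test. A minimal sketch, assuming the baseline-versus-alternative comparison reduces to accumulating log-likelihood ratios (the method-general procedure in the text is broader than this):

```python
import numpy as np

def sprt(llr_stream, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test.

    Accumulates log-likelihood ratios (alternative vs. baseline) until a
    boundary is crossed. Among tests with type I/II errors at most
    (alpha, beta), the SPRT minimizes the expected number of observations,
    the optimality property the text's stopping rule mirrors.
    """
    upper = np.log((1 - beta) / alpha)   # cross above: switch to alternative
    lower = np.log(beta / (1 - alpha))   # cross below: keep the baseline
    s, n = 0.0, 0
    for n, llr in enumerate(llr_stream, start=1):
        s += llr
        if s >= upper:
            return "switch", n
        if s <= lower:
            return "keep", n
    return "undecided", n

# Each unit of log-likelihood ratio is strong evidence for the alternative;
# the boundary log(19) ~ 2.94 is crossed on the third observation.
decision, n_obs = sprt([1.0] * 10)   # -> ("switch", 3)
```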