Growth mixture models (GMMs) are prevalent for modeling unknown population heterogeneity via distinct latent classes. However, GMMs are riddled with convergence issues, often requiring researchers to atheoretically alter the model with cross-class constraints simply to obtain convergence. We discuss how within-class random effects in GMMs exacerbate convergence issues, even though these random effects rarely help answer typical research questions. That is, latent classes already provide a discretization of continuous random effects, so including additional random effects within latent classes can unnecessarily complicate the model. These random effects are commonly included to properly specify the marginal covariance; however, random effects are inefficient for patterning a covariance matrix, resulting in estimation issues. The same goal can be achieved more simply with covariance pattern models, which we extend to the mixture model context in this paper (covariance pattern mixture models, CPMMs). We provide evidence from theory, simulation, and an empirical example showing that employing CPMMs (even if misspecified) instead of GMMs can circumvent the computational difficulties that plague GMMs, without sacrificing the ability to answer the types of questions commonly asked in empirical studies. Results show that CPMMs yield improved class enumeration and less biased class-specific growth trajectories, in addition to vastly improved convergence rates. Results also show that constraining covariance parameters across classes to bypass convergence issues with GMMs leads to poor results. An extensive software appendix is included to assist researchers in fitting CPMMs in Mplus.
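
As a brief sketch of the distinction summarized above (the notation here is illustrative and not taken from the abstract): within latent class $k$, a GMM with within-class random effects implies a structured marginal covariance through those random effects, whereas a CPMM omits the within-class random effects and patterns the marginal covariance directly, for example with a first-order autoregressive structure.

\[
\text{GMM: } \operatorname{Cov}(\mathbf{y}_i \mid k) = \boldsymbol{\Lambda} \boldsymbol{\Psi}_k \boldsymbol{\Lambda}^{\top} + \boldsymbol{\Theta}_k,
\qquad
\text{CPMM (e.g., AR(1)): } [\boldsymbol{\Sigma}_k]_{ts} = \sigma_k^{2}\, \rho_k^{\,|t-s|},
\]

where $\boldsymbol{\Lambda}$ contains the growth factor loadings, $\boldsymbol{\Psi}_k$ is the class-specific covariance matrix of the random effects, $\boldsymbol{\Theta}_k$ is the residual covariance matrix, and the CPMM instead estimates only the pattern parameters (here, $\sigma_k^{2}$ and $\rho_k$) for each class while retaining the class-specific mean trajectories.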