This paper studies the effect of covariance regularization on the classification of high-dimensional data. A mixture of Gaussians with a regularized covariance matrix is fitted to each class. Three data sets from different domains are used to suggest that the results apply broadly to high-dimensional data. The regularization required when the data are pre-processed with two dimensionality reduction techniques, principal component analysis (PCA) and random projection, is also compared. The observations include that heavy covariance regularization consistently yields classification accuracy at least as good as, and often better than, little or no covariance regularization. The results also indicate that random projection complements covariance regularization.
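
As a rough illustration of the setup described above, the sketch below fits one Gaussian mixture per class with covariance regularization (here, scikit-learn's `reg_covar`, which adds a constant to the diagonal of each covariance matrix) and compares PCA against random projection as a pre-processing step. The data set, reduced dimensionality, component count, and regularization strength are illustrative assumptions, not the paper's actual experimental settings.

```python
# Minimal sketch, assuming scikit-learn; hyperparameters are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.random_projection import GaussianRandomProjection


def fit_class_mixtures(X, y, n_components=2, reg_covar=1e-1):
    """Fit one Gaussian mixture per class; reg_covar is added to the
    diagonal of each covariance matrix as regularization."""
    models = {}
    for label in np.unique(y):
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="full",
                              reg_covar=reg_covar,
                              random_state=0)
        gmm.fit(X[y == label])
        models[label] = gmm
    return models


def predict(models, X):
    """Assign each sample to the class whose mixture gives the highest
    log-likelihood (uniform class priors assumed for simplicity)."""
    labels = sorted(models)
    scores = np.column_stack([models[c].score_samples(X) for c in labels])
    return np.asarray(labels)[scores.argmax(axis=1)]


if __name__ == "__main__":
    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Compare the two pre-processing schemes mentioned in the abstract.
    reducers = [("PCA", PCA(n_components=20, random_state=0)),
                ("Random projection",
                 GaussianRandomProjection(n_components=20, random_state=0))]
    for name, reducer in reducers:
        Z_tr = reducer.fit_transform(X_tr)
        Z_te = reducer.transform(X_te)
        models = fit_class_mixtures(Z_tr, y_tr)
        acc = (predict(models, Z_te) == y_te).mean()
        print(f"{name}: test accuracy = {acc:.3f}")
```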