We propose a method for the release of differentially private synthetic datasets. In many contexts, data contain sensitive values that cannot be released in their original form if individuals' privacy is to be protected. Synthetic data is a protection method that releases alternative values in place of the original ones, and differential privacy (DP) is a formal guarantee for quantifying the privacy loss. We propose a method that maximizes the distributional similarity of the synthetic data relative to the original data using a measure known as the pMSE, while guaranteeing ε-differential privacy. Additionally, we relax common DP assumptions concerning the distribution and boundedness of the original data. We prove theoretical results for the privacy guarantee and provide simulations of the empirical failure rate of the theoretical results under typical computational limitations. We also provide simulations comparing the accuracy of linear regression coefficients estimated from the synthetic data with the accuracy obtained from non-differentially private synthetic data and from other differentially private methods. Additionally, our theoretical results extend a prior result for the sensitivity of the Gini Index to include continuous predictors.

While DP is a rigorous risk measure, it has lacked flexible methods for modeling and generating synthetic data. Non-differentially private synthetic data methods (e.g., see Raghunathan et al. (2003); Reiter (2002, 2005); Drechsler (2011); Raab et al. (2017)), while not offering provable privacy, provide good tools for approximating accurate generative models that reflect the original data. Our proposed method retains the flexible modeling approach of synthetic data methodology and, in addition, maximizes a metric of distributional similarity, the pMSE, between the released synthetic data and the original data, subject to satisfying ε-DP. We also drop one of the most common DP assumptions concerning the input data, namely that it is bounded, and we do not limit ourselves to only categorical or discrete data. We find that our method produces good results in simulations, and it provides a new avenue for releasing DP datasets for potentially a wide range of applications.
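For readers unfamiliar with the measure, the following is a brief sketch of how the pMSE is commonly computed; the notation (N, c, and the propensity scores) is introduced here for illustration and the choice of classifier is left open, so this should be read as background rather than as this paper's exact construction. The original and synthetic records are stacked into a single dataset of size N with an indicator of synthetic membership, a model is fit to predict that indicator, and the fitted propensity scores \(\hat{p}_i\) are compared with the synthetic proportion \(c = n_{\text{syn}}/N\):
\[
\mathrm{pMSE} \;=\; \frac{1}{N} \sum_{i=1}^{N} \left( \hat{p}_i - c \right)^2 .
\]
A value near zero indicates that the model cannot distinguish synthetic from original records, i.e., the two datasets are distributionally similar; maximizing distributional similarity subject to the ε-DP constraint thus corresponds to driving this quantity toward zero.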