Estimation of the covariance matrix for high-dimensional multivariate datasets is a challenging and important problem in modern statistics. In this paper, we focus on high-dimensional Gaussian DAG models where sparsity is induced on the Cholesky factor L of the inverse covariance matrix. In recent work (Cao, Khare, and Ghosh, 2019), we established high-dimensional sparsity selection consistency for a hierarchical Bayesian DAG model, where an Erdos-Renyi prior is placed on the sparsity pattern in the Cholesky factor L, and a DAG-Wishart prior is placed on the resulting nonzero Cholesky entries. In this paper we significantly improve and extend this work by (a) considering more diverse and effective priors on the sparsity pattern in L, namely the beta-mixture prior and the multiplicative prior, and (b) establishing sparsity selection consistency under significantly relaxed conditions on p and on the sparsity pattern of the true model. We illustrate the validity of our theoretical results via numerical simulations, and use further simulations to show that our sparsity selection approach is competitive with existing state-of-the-art methods, both frequentist and Bayesian, in a variety of settings.
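As a point of reference for the sparsity structure described above, the standard modified Cholesky parameterization of the inverse covariance matrix in Gaussian DAG models can be written as follows; this is a minimal sketch using assumed notation (L lower triangular with unit diagonal, D diagonal with positive entries), not a formula quoted from the paper:
\[
\Omega \;=\; \Sigma^{-1} \;=\; L\,D^{-1}L^{\top}, \qquad L_{jj} = 1, \quad L_{ij} = 0 \ \text{for } i < j,
\]
so that the sparsity pattern of interest corresponds to the set of indices \(\{(i,j) : i > j,\ L_{ij} \neq 0\}\), which encodes the edge structure of the underlying DAG.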