Data utility and data privacy are both serious issues that must be considered when datasets are released for use in big data analytics, because they trade off against each other. That is, high-utility datasets generally carry a high risk of privacy violation; likewise, datasets that are transformed to be highly secure in terms of privacy preservation often suffer from poor data utility. To address these issues, several privacy preservation models have been proposed, such as k-Anonymity, l-Diversity, t-Closeness, Anatomy, k-Likeness, and (lp1, . . . , lpn)-Privacy; in these models, all users’ explicit identifier values are removed and all unique quasi-identifier values are distorted. Unfortunately, these privacy preservation models are static data models and still have data utility issues that must be addressed; thus, they can be insufficient to address privacy violation issues in big data analytics. For this reason, a new efficient and effective privacy preservation model is proposed in this work, based on aggregate query answers, that can guarantee the confidence of the range and minimize the number of values that can be re-identified. The aim of the proposed model is that, aside from satisfying privacy preservation constraints, complexity and data utility are also maintained as much as possible. Furthermore, extensive experiments show that the proposed privacy preservation model is both efficient and effective.
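The mechanism shared by the models listed above, removing explicit identifiers and distorting (generalizing) quasi-identifiers until each combination occurs at least k times, can be sketched as follows. This is a minimal illustration, not the proposed model: the records, the 10-year age bands, the zip-code truncation rule, and the k value are all hypothetical assumptions chosen for the example.

```python
from collections import Counter

# Hypothetical records: "name" is an explicit identifier; "age" and
# "zipcode" are quasi-identifiers; "disease" is the sensitive value.
records = [
    {"name": "Alice", "age": 29, "zipcode": "13000", "disease": "flu"},
    {"name": "Bob",   "age": 27, "zipcode": "13090", "disease": "cancer"},
    {"name": "Carol", "age": 34, "zipcode": "13210", "disease": "flu"},
    {"name": "Dave",  "age": 38, "zipcode": "13250", "disease": "hiv"},
]

def anonymize(rows):
    """Drop the explicit identifier and generalize quasi-identifiers:
    ages become 10-year bands, zip codes are truncated to a 3-digit prefix."""
    out = []
    for r in rows:
        band = (r["age"] // 10) * 10
        out.append({
            "age": f"{band}-{band + 9}",
            "zipcode": r["zipcode"][:3] + "**",
            "disease": r["disease"],
        })
    return out

def is_k_anonymous(rows, k):
    """k-Anonymity holds when every quasi-identifier combination
    occurs at least k times in the released table."""
    groups = Counter((r["age"], r["zipcode"]) for r in rows)
    return all(count >= k for count in groups.values())

released = anonymize(records)
print(is_k_anonymous(released, 2))  # → True: each (band, prefix) pair occurs twice
```

The sketch also hints at the utility trade-off the abstract describes: widening the age bands or shortening the zip prefix raises k (more privacy) but makes aggregate answers computed over the released table coarser (less utility).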