In 2011, Bhaskar et al. pointed out that in many cases one can ensure a sufficient level of privacy without adding noise, by utilizing adversarial uncertainty. Informally speaking, this observation comes from the fact that if at least part of the data is randomized from the adversary's point of view, it can effectively be used to hide other values. So far, the treatment of this idea in the literature has been mostly asymptotic, which has greatly limited its adoption in real-life scenarios. In this paper we aim to make the concept of utilizing adversarial uncertainty not only an interesting theoretical idea, but a practically useful technique, complementary to differential privacy, the state-of-the-art definition of privacy. This requires non-asymptotic privacy guarantees and a more realistic approach both to the randomness inherently present in the data and to the adversary's knowledge.

We extend the concept proposed by Bhaskar et al. and present results for a wider class of data; in particular, we cover data sets with dependencies. We also introduce a rigorous adversarial model. Moreover, in contrast to most previous papers in this field, we give detailed (non-asymptotic) results, which is motivated by practical considerations. This required a modified approach and more subtle mathematical tools, including Stein's method, which, to the best of our knowledge, has not been used in privacy research before.

Finally, we show how to combine adversarial uncertainty with the differential privacy approach, and we explore the synergy between them to enhance the privacy parameters already present in the data itself by adding only a small amount of noise.