This paper presents a novel sampling method based on the data's distribution function. Using Rényi divergence as the criterion, a recursive formulation is derived directly from the discrepancy between the original and estimated distributions. Gradient descent is then applied to solve this recursive equation, yielding the new samples. Because the method works from the distribution of the original data, it preserves that distribution in the sampled dataset. When the original distribution is unknown, it is estimated with kernel density estimation. Experimental results show that the proposed method preserves the data distribution, avoids altering the data concept, and maintains the model's predictive performance on the selected instances. In particular, it reduced the dataset size by approximately $70\%$ while leaving the accuracy of the learning algorithm unaffected.
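As a rough illustration of the general idea (a sketch, not the paper's exact algorithm), the snippet below builds a Gaussian kernel density estimate of a synthetic dataset, measures the order-2 Rényi divergence between the full-data density and the density of a subsample roughly $70\%$ smaller, and refines the subsample positions by gradient descent on a finite-difference gradient. The dataset, bandwidth, learning rate, and update rule are all hypothetical choices standing in for the paper's recursive formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def kde(points, grid, bw):
    # Gaussian kernel density estimate of `points`, evaluated on `grid`.
    z = (grid[:, None] - points[None, :]) / bw
    return np.exp(-0.5 * z**2).sum(axis=1) / (points.size * bw * np.sqrt(2 * np.pi))

def renyi2(p, q, dx, eps=1e-12):
    # Order-2 Renyi divergence D_2(p || q) = log \int p^2 / q dx,
    # approximated on a uniform grid with spacing dx.
    return np.log(((p**2 / (q + eps)) * dx).sum() + eps)

# Hypothetical dataset: a bimodal Gaussian mixture of 1000 points.
data = np.concatenate([rng.normal(-2.0, 0.5, 700), rng.normal(2.0, 0.8, 300)])
grid = np.linspace(-5.0, 5.0, 200)
dx = grid[1] - grid[0]
bw = 0.3
p_full = kde(data, grid, bw)

# Start from a random subsample ~70% smaller than the original dataset,
# then refine the sample positions by descending a finite-difference
# gradient of the divergence (a stand-in for the paper's recursive update).
sample = rng.choice(data, size=300, replace=False).astype(float)
d0 = renyi2(p_full, kde(sample, grid, bw), dx)
lr, h = 0.05, 1e-4
for _ in range(5):
    base = renyi2(p_full, kde(sample, grid, bw), dx)
    grad = np.empty_like(sample)
    for i in range(sample.size):
        bumped = sample.copy()
        bumped[i] += h
        grad[i] = (renyi2(p_full, kde(bumped, grid, bw), dx) - base) / h
    sample -= lr * grad
d1 = renyi2(p_full, kde(sample, grid, bw), dx)
print(f"divergence before={d0:.5f} after={d1:.5f}")
```

Because the objective compares the two densities directly, the retained points drift toward a configuration whose kernel density estimate matches the full dataset's, which is the distribution-preservation property the abstract claims.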