The use of synthetic data in pharmacology research has gained significant attention for its potential to address privacy concerns and promote open science. In this study, we implemented and compared three synthetic data generation methods, CT‐GAN, TVAE, and a simplified implementation of Avatar, on a previously published pharmacogenetic dataset of 253 patients with one measurement per patient (non‐longitudinal). The aim of this study was to evaluate these methods in terms of the trade-off between data utility and privacy. Our results showed that CT‐GAN and Avatar with k = 10 (the number of patients used to build the local generation model) had the best overall performance in terms of data utility and privacy preservation, whereas TVAE performed comparatively worse on both aspects. In terms of hazard ratio (HR) estimation, Avatar with k = 10 produced HR estimates closest to those of the original data, CT‐GAN slightly underestimated the HR, and TVAE showed the largest deviation from the original HR. We also investigated whether applying the algorithms multiple times improves the stability of the HR estimates. Our findings suggest that this approach can be beneficial, especially for small datasets, to obtain more reliable and robust results. In conclusion, our study provides valuable insights into the performance of the CT‐GAN, TVAE, and Avatar methods for synthetic data generation in pharmacogenetic research. Their application to other types of data and analyses (data-driven) used in pharmacology should be further investigated.
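
As a rough illustration of the kind of workflow summarized above, the following Python sketch fits CT-GAN and TVAE synthesizers (here via the open-source SDV library) on a tabular patient dataset, then repeats the synthesis several times and re-estimates the hazard ratio with a Cox model on each synthetic copy. The file name, the column names ("time", "event", "genotype"), the SDV 1.x API, and the use of lifelines are assumptions for illustration only, not the authors' actual pipeline; the Avatar method is not shown.

```python
# Hypothetical sketch: repeated synthetic data generation (CT-GAN / TVAE via SDV)
# followed by hazard ratio (HR) estimation with a Cox proportional-hazards model.
# Column names and file paths are placeholders, not the study's real data.
import pandas as pd
from lifelines import CoxPHFitter
from sdv.metadata import SingleTableMetadata
from sdv.single_table import CTGANSynthesizer, TVAESynthesizer


def estimate_hr(df: pd.DataFrame) -> float:
    """Fit a Cox model and return the HR for the (assumed) 'genotype' covariate."""
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    return float(cph.hazard_ratios_["genotype"])


def repeated_synthetic_hr(original: pd.DataFrame, synthesizer_cls, n_repeats: int = 20):
    """Generate several synthetic datasets and collect the HR estimated on each."""
    metadata = SingleTableMetadata()
    metadata.detect_from_dataframe(original)
    hrs = []
    for _ in range(n_repeats):
        synth = synthesizer_cls(metadata)              # fresh model for each repetition
        synth.fit(original)
        synthetic = synth.sample(num_rows=len(original))
        hrs.append(estimate_hr(synthetic))
    return hrs


if __name__ == "__main__":
    data = pd.read_csv("pharmacogenetic_cohort.csv")   # hypothetical 253-patient dataset
    print("Original HR:", estimate_hr(data))
    for cls in (CTGANSynthesizer, TVAESynthesizer):
        hrs = repeated_synthetic_hr(data, cls)
        print(cls.__name__, "mean HR over repeats:", sum(hrs) / len(hrs))
```

Averaging the HR over repeated synthetic datasets, as sketched here, is one way to reduce the run-to-run variability that a single synthesis can introduce, which is particularly relevant for small datasets such as the 253-patient cohort studied.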