Privacy-preserving data publishing (PPDP) is an essential prerequisite for data-driven AI technologies such as data mining, machine learning, and deep learning to extract knowledge from data safely and legally. Accordingly, it has been studied intensively over the last decade. However, existing privacy protection mechanisms cannot simultaneously address three requirements: preventing background-knowledge attacks, maximizing data availability, and resisting sensitive-information mining. In this work, we propose a novel privacy-preserving data publishing framework that protects privacy by releasing simulated data instead of real data. The framework uses a Bayesian network to generate data whose distribution is similar to that of the real data. It consists of two ingredients. First, we transform the problem of data publication into the construction of a Bayesian network; correspondingly, the problem of privacy leakage is transformed into a form of Bayesian inference attack. Second, we propose a re-anonymity framework, named (d, L)-injection, which flexibly balances increased privacy-protection strength against data availability. In addition, we transplant three classical privacy-preserving strategies to the generated Bayesian network and demonstrate the effectiveness of the method on three public data sets from multiple application domains.
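To make the release mechanism concrete, the sketch below fits a deliberately simple chain-structured Bayesian network over categorical columns and then draws synthetic records from it by forward sampling, so the published table never contains a real record. This is only an illustration of the general "fit a Bayesian network, publish samples" idea from the abstract; the chain structure, the `alpha` smoothing parameter, and all function names are assumptions for this sketch, not the paper's structure-learning procedure or its (d, L)-injection mechanism.

```python
import numpy as np
import pandas as pd

def fit_chain_bayesian_network(df, alpha=1.0):
    """Learn a chain-structured Bayesian network
    P(X1) * prod_i P(X_i | X_{i-1}) over categorical columns,
    with Laplace smoothing `alpha` on every probability table.
    (Illustrative assumption: a fixed chain, not a learned DAG.)"""
    cols = list(df.columns)
    cpts = {}
    # Smoothed marginal distribution for the root variable.
    counts = df[cols[0]].value_counts() + alpha
    cpts[cols[0]] = counts / counts.sum()
    # Smoothed conditional table P(child | parent) for each chain link.
    for parent, child in zip(cols, cols[1:]):
        table = pd.crosstab(df[parent], df[child]) + alpha
        cpts[child] = table.div(table.sum(axis=1), axis=0)
    return cols, cpts

def sample_synthetic(cols, cpts, n, rng=None):
    """Draw n synthetic records by ancestral (forward) sampling
    along the chain, root first, then each child given its parent."""
    if rng is None:
        rng = np.random.default_rng()
    root = cpts[cols[0]]
    out = {cols[0]: rng.choice(root.index, size=n, p=root.values)}
    for parent, child in zip(cols, cols[1:]):
        table = cpts[child]
        out[child] = np.array([
            rng.choice(table.columns, p=table.loc[p].values)
            for p in out[parent]
        ])
    return pd.DataFrame(out)

# Toy usage: the synthetic table, not `real`, would be published.
real = pd.DataFrame({
    "age_band":  ["20s", "30s", "30s", "40s", "20s", "40s"],
    "education": ["BSc", "MSc", "BSc", "PhD", "BSc", "MSc"],
    "income":    ["low", "mid", "mid", "high", "low", "high"],
})
cols, cpts = fit_chain_bayesian_network(real)
synthetic = sample_synthetic(cols, cpts, n=1000)
```

A realistic instantiation would learn the network structure from data (e.g., by score-based search) and apply the paper's anonymization strategies to the resulting network before sampling; the chain here merely keeps the ancestral-sampling logic readable.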