The disclosure risk of synthetic data is not yet well understood. Studies have shown that synthetic data generation techniques can produce records that closely resemble, and sometimes exactly replicate, records in the original data. Publishing synthetic datasets can therefore endanger the privacy of the individuals represented in them. In our work, we study synthetic data produced by several generation techniques, including recent diffusion models. We assess the disclosure risk of synthetic datasets via an attribute inference attack, in which an attacker with access to a subset of publicly available features and at least one synthesized dataset attempts to infer sensitive features unknown to them. We also report the predictive accuracy and F1 score of a random forest classifier trained on each synthetic dataset. For sensitive categorical features, we show that the attribute inference attack is largely infeasible and unsuccessful, whereas for continuous attributes the attacker can achieve approximate inference. This holds for synthetic datasets derived from diffusion models, GANs, and DPGANs, indicating that only approximate, not exact, attribute inference is possible.
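To make the attack setup concrete, the following is a minimal sketch of such an attribute inference attack, assuming scikit-learn and hypothetical file and column names (none of which come from the paper): the attacker fits a random forest on the released synthetic data alone and uses it to predict the sensitive attribute of target individuals from their publicly known features.

```python
# Minimal sketch of the attribute inference attack described above.
# Assumptions (illustrative, not from the paper): file names, column
# names, and the choice of numeric public features. The attacker
# never sees the real sensitive values during training; they are
# used here only to score how well the attack inferred them.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

synthetic = pd.read_csv("synthetic.csv")    # released synthetic dataset
targets = pd.read_csv("real_holdout.csv")   # real records the attacker targets

public_features = ["age", "hours_per_week", "capital_gain"]  # assumed known
sensitive = "income_bracket"                                 # attribute to infer

# Train the attack model on the synthetic data only.
attack_model = RandomForestClassifier(n_estimators=100, random_state=0)
attack_model.fit(synthetic[public_features], synthetic[sensitive])

# Infer the sensitive attribute of real individuals from public features.
predictions = attack_model.predict(targets[public_features])
print("accuracy:", accuracy_score(targets[sensitive], predictions))
print("macro F1:", f1_score(targets[sensitive], predictions, average="macro"))
```

For a continuous sensitive attribute, the same sketch applies with a regressor in place of the classifier, and inference success is measured by how close the predictions fall to the true values rather than by exact-match accuracy, which is why approximate inference remains possible there.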