A malicious data miner can infer users' private information in online social networks (OSNs) by mining the information those users have disclosed. By exploring the public information about a target user (i.e., an individual or a group of OSN users whose privacy is under attack), an attacker can prepare a training data set holding similar information about other users who have openly disclosed their data. Using a machine learning classifier, the attacker can treat the released information about the users under attack as non-class attributes and extract the private information as a class attribute. Some existing techniques offer privacy protection against specific classifiers; however, the protection they provide can be undermined if an attacker uses a different classifier (rather than the one assumed by the protection technique) to infer sensitive information. In reality, it is difficult to predict which classifiers will be involved in a privacy attack. In this study, we propose a privacy-preserving technique that first prepares a training data set in the same way an attacker would and then takes a classifier-independent approach to extract patterns (or logic rules) from that data set. Based on the extracted rule set, it then suggests
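The attack setting described above can be illustrated with a minimal sketch. The toy data, attribute names, and values below are purely hypothetical, and the frequency-based rule extraction is only one simple stand-in for the classifier-independent pattern extraction the abstract refers to, not the paper's actual method:

```python
from collections import Counter, defaultdict

# Hypothetical toy training set an attacker could assemble from users who
# openly disclosed both their public attributes and the sensitive attribute.
# Each row: (public attributes as non-class attributes, sensitive class value).
training = [
    ({"likes": "politics_page", "group": "city_A"}, "liberal"),
    ({"likes": "politics_page", "group": "city_A"}, "liberal"),
    ({"likes": "sports_page",   "group": "city_B"}, "conservative"),
    ({"likes": "sports_page",   "group": "city_B"}, "conservative"),
]

def extract_rules(data):
    """Extract simple (attribute, value) -> majority-class logic rules.

    The extraction counts class frequencies directly from the data, so it
    does not depend on any particular classifier.
    """
    counts = defaultdict(Counter)
    for attrs, label in data:
        for av in attrs.items():
            counts[av][label] += 1
    return {av: c.most_common(1)[0][0] for av, c in counts.items()}

def infer(rules, public_attrs):
    """Infer the hidden class attribute of a target user by majority vote
    over the rules that fire on the user's released attributes."""
    votes = Counter(rules[av] for av in public_attrs.items() if av in rules)
    return votes.most_common(1)[0][0] if votes else None

rules = extract_rules(training)
# The target user disclosed only public attributes; the attacker infers
# the sensitive class attribute from the extracted rules.
print(infer(rules, {"likes": "politics_page", "group": "city_A"}))  # liberal
```

A defender can run the same extraction to learn which disclosed attribute values leak the sensitive class, regardless of which classifier an attacker later chooses.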