Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities. Previous work addressing privacy issues for language models has mostly focused on data preprocessing and differential privacy methods, both of which require re-training the underlying LM. We propose knowledge unlearning as an alternative method to reduce privacy risks for LMs post hoc. We show that simply applying the unlikelihood training objective to target token sequences is effective at forgetting them with little to no degradation of general language modeling performance; it sometimes even substantially improves the underlying LM with just a few iterations. We also find that sequential unlearning is better than trying to unlearn all the data at once, and that unlearning is highly dependent on which kind of data (domain) is forgotten. Through comparisons with a previous data preprocessing method known to mitigate privacy risks for LMs, we show that unlearning can give a stronger empirical privacy guarantee in scenarios where the data vulnerable to extraction attacks are known a priori, while being orders of magnitude more computationally efficient. We release the code and dataset needed to replicate our results at https://github.com/joeljang/knowledge-unlearning.
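The core mechanic can be illustrated with a minimal NumPy sketch: instead of descending the negative log-likelihood of a target sequence (standard training), unlearning ascends it, pushing the model's probability of the target tokens down. This toy softmax "LM" over a tiny vocabulary is an illustrative assumption, not the paper's implementation, which applies the objective to full transformer LMs.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy "LM": a single logit vector over a 5-token vocabulary (assumption
# for illustration; a real LM predicts logits per context).
rng = np.random.default_rng(0)
logits = rng.normal(size=5)
target = 2   # the token we want the model to forget
lr = 0.5

p_before = softmax(logits)[target]

# Standard training MINIMIZES -log p(target); unlearning instead
# MAXIMIZES it, i.e. takes a gradient-ascent step on the usual
# cross-entropy loss.
grad = softmax(logits)
grad[target] -= 1.0            # d(-log p[target]) / d(logits)
logits = logits + lr * grad    # ascent, not descent

p_after = softmax(logits)[target]
print(p_before, p_after)       # probability of the target token drops
```

A few such steps drive the target sequence's likelihood toward zero, which is why the abstract reports forgetting "with just a few iterations."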
Early detection of faults in rotating machinery systems is crucial for preventing system failure, increasing safety, and reducing maintenance costs. Current fault detection methods suffer from inefficient feature extraction, the need to designate a threshold that produces minimal false alarm rates, and the need for costly expert domain knowledge. In this paper, we propose a novel data-driven health division method based on convolutional neural networks that uses a graphical representation of time series data called a nested scatter plot. The proposed method trains the model with a small amount of labeled data and does not require a threshold value to predict the health state of rotary machines. Notwithstanding the lack of datasets with ground-truth health stages, our experiments on two open run-to-failure bearing datasets demonstrate that our method detects the early symptoms of bearing wear earlier and more efficiently than other threshold-based health indicator methods.
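One simple way to turn a 1-D vibration signal into an image a CNN can consume is a lag plot binned into a 2-D histogram. The sketch below is an illustrative stand-in, since the abstract does not specify the exact construction of the nested scatter plot; the function name, lag value, and synthetic signal are all assumptions.

```python
import numpy as np

def lag_scatter_image(signal, lag=1, bins=32):
    """Bin the points (x[t], x[t+lag]) into a 2-D histogram, yielding
    an image-like representation of a time series. Hypothetical
    helper for illustration; not the paper's exact nested scatter plot."""
    x, y = signal[:-lag], signal[lag:]
    img, _, _ = np.histogram2d(x, y, bins=bins,
                               range=[[-1, 1], [-1, 1]])
    img /= img.max() or 1.0   # normalize to [0, 1] for CNN input
    return img

# Synthetic vibration snippet: a sine plus noise, scaled to [-1, 1].
t = np.linspace(0, 8 * np.pi, 2048)
sig = 0.8 * np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
sig = np.clip(sig, -1, 1)

image = lag_scatter_image(sig, lag=4)
print(image.shape)  # a (32, 32) array, ready for a small CNN
```

Because the representation is a fixed-size image rather than a hand-engineered feature vector, a standard image classifier can be trained on a small amount of labeled healthy/degraded data, sidestepping threshold selection.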