The incentive mechanism is key to the success of the Bitcoin system as a permissionless blockchain: it encourages participants to contribute their computing resources to ensure the correctness and consistency of user transaction records. Selfish mining attacks, however, prove that Bitcoin's incentive mechanism is not incentive-compatible, contrary to the traditional view. Selfish mining attacks can waste mining power, especially that of honest participants, and thus pose serious security challenges to the Bitcoin system. Although a series of studies address selfish mining behaviors, they have limitations: they either require modifying the existing protocol or achieve unsatisfactory detection performance. We propose ForkDec, a high-accuracy selfish-mining detection system based on a fully connected neural network, designed to effectively deter selfish attackers. The network contains 100 neurons in total (10 hidden layers with 10 neurons per layer) and is trained on a set of about 200,000 fork samples. The training data set is generated by a Bitcoin mining simulator that we built beforehand. Applying ForkDec to the test set, we achieve a detection accuracy of 99.03%. The evaluation demonstrates that ForkDec has practical value and promising research prospects.
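The abstract describes the classifier architecture concretely enough to sketch. Below is a minimal, hypothetical PyTorch rendering of a fully connected network with 10 hidden layers of 10 neurons each; the input feature count (8 fork-related features) and the binary selfish/honest output are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a ForkDec-style classifier: a fully connected
# network with 10 hidden layers of 10 neurons each, as described in the
# abstract. The 8 input features and 2 output classes are assumptions.
import torch
import torch.nn as nn

def build_forkdec(num_features: int = 8, num_classes: int = 2) -> nn.Sequential:
    layers = []
    in_dim = num_features
    for _ in range(10):                                # 10 hidden layers
        layers += [nn.Linear(in_dim, 10), nn.ReLU()]   # 10 neurons per layer
        in_dim = 10
    layers.append(nn.Linear(in_dim, num_classes))      # fork classification head
    return nn.Sequential(*layers)

model = build_forkdec()
dummy_fork_features = torch.randn(4, 8)    # 4 simulated fork samples
print(model(dummy_fork_features).shape)    # torch.Size([4, 2])
```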
Machine unlearning is the process by which a deployed machine learning model is made to forget some of its training data items. It normally yields two models, the original model and the unlearned model, representing the training results before and after the data items are deleted. However, recent studies find that machine unlearning is vulnerable to membership inference attacks: because training and nontraining data behave differently (i.e., items in the training set receive high posterior probabilities), an attacker can exploit this property to infer whether an item was used to train the original model. Nevertheless, such attacks fail in label-only settings, in which attackers cannot obtain the posteriors. In this paper, we propose a new label-only membership inference attack scheme targeted at machine unlearning that eliminates the dependence on posteriors. Our intuition is that injected perturbations on candidate samples elicit different behaviors for training and nontraining data. In our scheme, the attacker iteratively queries the original and unlearned models, injecting perturbations until their predicted labels change, and determines whether an item has been deleted by observing the perturbation magnitude. Extensive experiments on the MNIST, CIFAR10, CIFAR100, and STL10 data sets demonstrate the effectiveness of the proposed attack.
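The query-and-perturb loop can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: random noise of growing magnitude stands in for whatever perturbation strategy the paper uses, and the decision threshold and direction of comparison are assumptions.

```python
# Illustrative label-only sketch: perturb a candidate sample with growing
# random noise until the model's predicted label flips, and use the needed
# perturbation size as a proxy for distance to the decision boundary.
import numpy as np

def min_flip_magnitude(predict_label, x, step=0.05, max_eps=5.0, trials=10, seed=0):
    """predict_label: black-box function returning only the predicted class."""
    rng = np.random.default_rng(seed)
    base_label = predict_label(x)
    eps = step
    while eps <= max_eps:
        for _ in range(trials):
            noise = rng.normal(size=x.shape)
            noise *= eps / (np.linalg.norm(noise) + 1e-12)
            if predict_label(x + noise) != base_label:
                return eps          # smallest magnitude that flipped the label
        eps += step
    return max_eps                  # label never flipped within the budget

def infer_deleted(predict_original, predict_unlearned, x, threshold=0.1):
    # Assumption: a deleted training sample sits farther from the boundary
    # under the original model than under the unlearned model, so the drop
    # in flip magnitude serves as the membership signal (threshold is made up).
    gap = min_flip_magnitude(predict_original, x) - min_flip_magnitude(predict_unlearned, x)
    return gap > threshold
```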
Federated learning is an emerging paradigm that enables multiple organizations to jointly train a model without revealing their private data. As an important variant, vertical federated learning (VFL) deals with cases in which collaborating organizations own data of the same set of users but with disjoint features. VFL is generally regarded as more secure than horizontal federated learning. However, recent research (USENIX Security'22) reveals that label inference attacks are still possible in VFL, in which an attacker can acquire the privately owned labels of other participants; even VFL constructed with model splitting (the VFL architecture with a stronger security guarantee) is not immune. To solve this issue, we propose the dispersed training framework. It uses secret sharing to break the correlation between the bottom model and the training data. Accordingly, even if the attacker obtains the gradients during training, they cannot deduce the feature representation of labels from the bottom model. In addition, we design a customized model aggregation method so that the shared model can be privately combined, and the linearity of the secret sharing scheme ensures that training accuracy is preserved. Theoretical and experimental analyses indicate the satisfactory performance and effectiveness of our framework.
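The role of additive secret sharing and its linearity can be illustrated with a short sketch. The two-share split, the aggregation weight, and the tensor shapes below are assumptions for illustration only, not the paper's protocol.

```python
# Minimal sketch of the additive secret sharing idea: a bottom-model output
# is split into random shares whose sum reconstructs the original value, so
# a linear aggregation performed on shares yields the same result as on the
# clear values. Two shares and a single weight are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def share(value, n_shares=2):
    """Split `value` into n additive shares: share_1 + ... + share_n == value."""
    shares = [rng.normal(size=value.shape) for _ in range(n_shares - 1)]
    shares.append(value - sum(shares))
    return shares

bottom_output = rng.normal(size=(4, 16))   # feature representation of one party
s1, s2 = share(bottom_output)              # neither share alone reveals the output

# Linearity: weighting the shares and then recombining gives the same result
# as weighting the clear values, so training accuracy is unaffected.
w = 0.5
aggregated_from_shares = w * s1 + w * s2
assert np.allclose(aggregated_from_shares, w * bottom_output)
```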