Users who initiate continuous location queries are prone to trajectory information leakage, and the query information they obtain is not effectively reused. To address these problems, we propose a continuous location query protection scheme based on caching and an adaptive variable-order Markov model. When a user initiates a query request, we first search the cache for the required data. When the local cache cannot satisfy the user's demand, we use a variable-order Markov model to predict the user's future query location and generate a k-anonymous set based on the predicted location and cache contribution. We perturb this location set with differential privacy and send the perturbed set to the location service provider to obtain the service. The query results returned by the service provider are cached on the local device, and the local cache is updated over time. Comparative experiments with other schemes show that the proposed scheme reduces the number of interactions with the location service provider, improves the local cache hit rate, and effectively protects users' location privacy.
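The abstract does not specify which differential privacy mechanism perturbs the k-anonymous location set. Below is a minimal sketch, assuming a simple coordinate-wise Laplace mechanism; the function name, sensitivity bound, and privacy budget are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def perturb_location_set(locations, epsilon=1.0, sensitivity=0.01):
    """Add Laplace noise (scale = sensitivity / epsilon) to every coordinate.

    locations: array-like of shape (k, 2) holding the (latitude, longitude)
    pairs of the k-anonymous set. Returns the perturbed set sent to the LSP.
    """
    locations = np.asarray(locations, dtype=float)
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=locations.shape)
    return locations + noise

# Example: perturb a 4-anonymous set before sending it to the provider.
anon_set = [(40.7580, -73.9855), (40.7614, -73.9776),
            (40.7527, -73.9772), (40.7484, -73.9857)]
print(perturb_location_set(anon_set, epsilon=0.5))
```

The paper may well use a location-specific mechanism such as planar Laplace (geo-indistinguishability); the sketch only illustrates where the perturbation step sits in the query flow.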
With the rapid development of the Internet of Things, location-based services have emerged in many social and business fields. To obtain a service, the user must transmit query data to an untrusted location service provider and then retrieve the required content. Most existing schemes protect the user's location privacy while ignoring the user's query privacy. This paper proposes a secure and effective query privacy protection scheme. A multi-user cache stores historical query results to reduce the number of communications between users and the untrusted server, and trust computing is introduced to detect malicious users in neighboring caches, thereby reducing the possibility of privacy leakage. When the cache cannot meet the demand, the user's location coordinates are converted with a Moore curve, processed with encryption, and sent to the location service provider, preventing malicious entities from accessing the transformed data. Finally, we simulate and evaluate the scheme on real datasets, and the experimental results demonstrate its safety and effectiveness.
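The coordinate conversion step maps a two-dimensional location onto a one-dimensional space-filling-curve index before encryption. The sketch below uses the classic Hilbert-curve index as a stand-in for the paper's Moore curve (the Moore curve is a closed variant of the Hilbert curve); the grid order and bounding box are illustrative assumptions.

```python
def hilbert_index(order, x, y):
    """Classic xy2d algorithm: map cell (x, y) on a 2**order x 2**order grid
    to its one-dimensional position along the Hilbert curve."""
    n = 2 ** order
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the next recursion level lines up.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

def location_to_index(lat, lon, order=16,
                      lat_range=(40.70, 40.80), lon_range=(-74.02, -73.93)):
    """Quantize (lat, lon) inside an assumed bounding box into a grid cell,
    then return its curve index (the value that would be encrypted and sent)."""
    n = 2 ** order
    x = min(n - 1, int((lon - lon_range[0]) / (lon_range[1] - lon_range[0]) * n))
    y = min(n - 1, int((lat - lat_range[0]) / (lat_range[1] - lat_range[0]) * n))
    return hilbert_index(order, x, y)

print(location_to_index(40.7580, -73.9855))
```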
Distributed federated learning models are vulnerable to membership inference attacks (MIA) because they memorize information about their training data. Through a comprehensive privacy analysis of distributed federated learning models, we design an attack model based on generative adversarial networks (GAN) and membership inference attacks. Malicious participants (attackers) use the attack model to reconstruct the training sets of other regular participants without any negative impact on the global model. To counter this attack, we apply differential privacy to the training process, which effectively reduces the accuracy of membership inference attacks by clipping the gradient and adding noise to it. In addition, we manage participants hierarchically through trust domain division to alleviate the performance degradation caused by differential privacy. Experimental results show that in distributed federated learning, our scheme effectively defends against membership inference attacks in white-box scenarios while maintaining the usability of the global model, achieving an effective trade-off between privacy and usability.
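The defense clips each participant's gradient update and adds noise before aggregation. Below is a minimal sketch, assuming L2 clipping of each update followed by Gaussian noise on the averaged update, in the style of DP-SGD; the clipping norm and noise multiplier are illustrative, not the paper's values.

```python
import numpy as np

def clip_and_noise(updates, clip_norm=1.0, noise_multiplier=1.1):
    """Clip each participant's update to `clip_norm` in L2 norm, average,
    then add Gaussian noise scaled to the clipping bound."""
    clipped = []
    for u in updates:
        u = np.asarray(u, dtype=float)
        u = u / max(1.0, np.linalg.norm(u) / clip_norm)  # per-update clipping
        clipped.append(u)
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(updates)
    return avg + np.random.normal(0.0, sigma, size=avg.shape)

# Example: three participants submit gradient updates of the same shape.
updates = [np.random.randn(10) for _ in range(3)]
print(clip_and_noise(updates))
```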
Location-based services currently face two critical issues: an insufficient number of anonymous users and location semantic homogeneity. To prevent location homogeneity attacks, we propose a blockchain-based anonymization scheme. The scheme introduces a blockchain to store the anonymization process of the requesting user and collaborating users as evidence, establishes an incentive mechanism to promote cooperation between the two parties, and then selects users who meet the semantic threshold through a location semantic tree to construct the final anonymous set. Security analysis and simulation experiments demonstrate that the proposed scheme can effectively motivate and constrain each user. The semantic security value is close to the maximum value of 1, preventing homogeneity attacks caused by location semantics and protecting users' location privacy.
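The abstract reports a semantic security value close to its maximum of 1 but does not define the metric. As an illustration only, the sketch below assumes the value is the normalized Shannon entropy of the semantic categories in the anonymous set, which is 1 when every member has a distinct semantic type and 0 when all members share one type.

```python
import math
from collections import Counter

def semantic_security(anon_set_semantics):
    """Normalized Shannon entropy of the semantic categories in an anonymous
    set (an assumed stand-in for the paper's semantic security value)."""
    counts = Counter(anon_set_semantics)
    n = len(anon_set_semantics)
    if len(counts) <= 1:
        return 0.0  # fully homogeneous (or a single member): no semantic diversity
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy / math.log2(n)

# Example: a semantically diverse 4-anonymous set scores the maximum value 1.0.
print(semantic_security(["hospital", "school", "mall", "park"]))
```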