The widespread use of the Internet of Things (IoT) has raised many concerns, including the protection of private information. Existing privacy-preservation methods struggle to balance data utility against privacy, and also suffer from efficiency and scalability problems. This paper proposes an efficient data stream perturbation method named P2RoCAl. P2RoCAl offers better data utility than comparable methods: classification accuracies on P2RoCAl-perturbed data streams are very close to those on the original data streams. P2RoCAl also provides stronger resilience against data reconstruction attacks.
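To make the idea concrete, below is a minimal sketch of rotation-based stream perturbation in the spirit of P2RoCAl. The per-chunk processing and the choice of a uniformly random orthogonal matrix are illustrative assumptions, not the published algorithm.

```python
# A minimal sketch of rotation-based data stream perturbation (assumed
# mechanism, for illustration only; not the published P2RoCAl algorithm).
import numpy as np

def random_rotation(dim, rng):
    """Draw a random orthogonal (rotation) matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.standard_normal((dim, dim)))
    # Sign correction so the matrix is uniform over the orthogonal group.
    return q * np.sign(np.diag(r))

def perturb_stream(chunks, rng=None):
    """Rotate each incoming chunk of records: pairwise geometry (and hence
    many classifier decisions) is preserved, raw attribute values are hidden."""
    rng = rng or np.random.default_rng()
    for chunk in chunks:  # chunk shape: (n_records, n_attributes)
        rotation = random_rotation(chunk.shape[1], rng)
        yield chunk @ rotation.T

# Example: perturb a stream of 3 chunks, each 5 records x 4 attributes.
rng = np.random.default_rng(0)
stream = (rng.random((5, 4)) for _ in range(3))
for perturbed in perturb_stream(stream, rng):
    print(perturbed.shape)
```

Because rotations preserve pairwise distances, distance-based learners such as k-NN behave almost identically on the perturbed stream, which is consistent with the utility claim above.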
The Industrial Internet of Things (IIoT) is revolutionizing many leading industries such as energy, agriculture, mining, transportation, and healthcare. IIoT is a major driving force behind Industry 4.0, which relies heavily on machine learning (ML) to capitalize on the massive interconnection and large volumes of IIoT data. However, ML models trained on sensitive data tend to leak private information under adversarial attacks, limiting ML's full potential in Industry 4.0. This paper introduces a framework named PriModChain that enforces privacy and trustworthiness on IIoT data by amalgamating differential privacy, federated ML, the Ethereum blockchain, and smart contracts. The feasibility of PriModChain in terms of privacy, security, reliability, safety, and resilience was evaluated using simulations developed in Python with socket programming on a general-purpose computer. We used the Ganache v2.0.1 local test network for the local experiments and the Kovan test network for the public blockchain testing. We verified the proposed security protocol using the Scyther v1.1.3 protocol verifier.
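As a rough illustration of how two of these ingredients fit together, the sketch below combines differential privacy with federated averaging: each party clips and noises its local model update before sharing it. The clipping bound, noise scale, and function names are assumptions made for illustration; the blockchain and smart-contract layers are out of scope here.

```python
# A minimal sketch of differentially private federated averaging, one
# assumed ingredient of a PriModChain-style pipeline (illustrative only).
import numpy as np

def dp_local_update(weights, clip=1.0, epsilon=0.5, rng=None):
    """Clip a local model update and add Laplace noise before release,
    so the aggregator never sees the raw (sensitive) update."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(weights)
    clipped = weights * min(1.0, clip / (norm + 1e-12))
    noise = rng.laplace(scale=clip / epsilon, size=weights.shape)
    return clipped + noise

def federated_average(updates):
    """Server-side aggregation of already-privatized updates."""
    return np.mean(updates, axis=0)

# Example: three parties privatize and share 10-dimensional updates.
rng = np.random.default_rng(1)
local_updates = [rng.standard_normal(10) for _ in range(3)]
global_update = federated_average(
    [dp_local_update(w, rng=rng) for w in local_updates])
print(global_update)
```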
The published article can be found at https://doi.org/10.1016/j.cose.2020.101951

Facial recognition technologies are implemented in many areas, including but not limited to citizen surveillance, crime control, activity monitoring, and facial expression evaluation. However, processing biometric information is a resource-intensive task that often involves third-party servers, which can be accessed by adversaries with malicious intent. Biometric information delivered to untrusted third-party servers in an uncontrolled manner can be considered a significant privacy leak (i.e., uncontrolled information release), as biometrics can be correlated with sensitive data such as healthcare or financial records. In this paper, we propose a privacy-preserving technique for "controlled information release", in which we disguise an original face image and prevent leakage of the biometric features while still identifying a person. We introduce a new privacy-preserving face recognition protocol named PEEP (Privacy using EigEnface Perturbation) that utilizes local differential privacy. PEEP applies differentially private perturbation to Eigenfaces and stores only the perturbed data on the third-party servers, which run a standard Eigenface recognition algorithm. As a result, the trained model is not vulnerable to privacy attacks such as membership inference and model memorization attacks. Our experiments show that PEEP exhibits a classification accuracy of around 70%-90% under standard privacy settings.

Applications of facial recognition range from individual face recognition for unlocking a mobile device to crowd surveillance. Companies have also invested heavily in this field; Google's facial recognition in the Google Glass project [1], Facebook's DeepFace technology [2], and Apple's patented face identification system [3] are examples of the growing number of facial identification systems. Existing face recognition technologies and the widespread use of biometrics introduce a serious threat to individuals' privacy, exacerbated by the fact that biometric identification is often done quietly, without proper consent from the people observed. For example, the UK uses an estimated 4.2 million surveillance cameras to monitor public areas [4]. However, it is not feasible to obtain explicit consent from such an extremely large number of persons being watched. Nevertheless, facial images directly reflect their owners' identity, and they can easily be linked to other sensitive information such as health records and financial records, raising privacy concerns. Biometric data analysis systems often need to employ high-performance third-party servers to conduct complex computational operations on large numbers of biometric data inputs. However, these third-party servers can be accessed by untrusted parties, causing privacy issues. Among different definitions, information privacy can be defined as the "controlled information release" that permits an anticipated level of utility via a private function that protects the identity of the data owners [5]. Privacy-preserving face recognition involves at l...
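The following sketch illustrates the core mechanism described above: projecting a face image onto Eigenfaces and adding calibrated Laplace noise to the coefficients before they leave the trusted side. The sensitivity estimate, epsilon value, and helper names are illustrative assumptions rather than the paper's exact settings.

```python
# A minimal sketch of PEEP's core idea: Laplace-perturbed Eigenface
# coefficients (parameters below are assumed, not the paper's settings).
import numpy as np

def eigenfaces(train, k):
    """Top-k eigenfaces (principal components) of centered training images."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]  # mean image, (k, n_pixels) basis

def peep_encode(image, mean, basis, epsilon=8.0, sensitivity=1.0, rng=None):
    """Project onto eigenfaces, then add Laplace noise (local-DP style);
    only this perturbed vector is sent to the third-party server."""
    rng = rng or np.random.default_rng()
    coeffs = basis @ (image - mean)
    return coeffs + rng.laplace(scale=sensitivity / epsilon, size=coeffs.shape)

# Example: 100 synthetic 32x32 "faces"; encode one of them privately.
rng = np.random.default_rng(2)
faces = rng.random((100, 32 * 32))
mean, basis = eigenfaces(faces, k=20)
private_vector = peep_encode(faces[0], mean, basis, rng=rng)
print(private_vector.shape)  # (20,)
```

Since the server only ever stores noised coefficients, retraining or inspecting the recognition model reveals the perturbed representation rather than the raw biometric, which is what blunts membership inference and model memorization attacks.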