The published article can be found at https://doi.org/10.1016/j.cose.2020.101951

Facial recognition technologies are implemented in many areas, including but not limited to citizen surveillance, crime control, activity monitoring, and facial expression evaluation. However, processing biometric information is a resource-intensive task that often involves third-party servers, which can be accessed by adversaries with malicious intent. Biometric information delivered to untrusted third-party servers in an uncontrolled manner can be considered a significant privacy leak (i.e., uncontrolled information release), as biometrics can be correlated with sensitive data such as healthcare or financial records. In this paper, we propose a privacy-preserving technique for "controlled information release", in which we disguise an original face image and prevent leakage of the biometric features while still allowing a person to be identified. We introduce a new privacy-preserving face recognition protocol named PEEP (Privacy using EigEnface Perturbation) that utilizes local differential privacy. PEEP applies differentially private perturbation to Eigenfaces and stores only the perturbed data on the third-party servers, which then run a standard Eigenface recognition algorithm. As a result, the trained model is not vulnerable to privacy attacks such as membership inference and model memorization attacks. Our experiments show that PEEP exhibits a classification accuracy of around 70%-90% under standard privacy settings.

1. Introduction

Face recognition is now applied in settings that range from individual face recognition for unlocking a mobile device to crowd surveillance. Companies have also invested heavily in this field; Google's facial recognition in the Google Glass project [1], Facebook's DeepFace technology [2], and Apple's patented face identification system [3] are examples of the growing number of facial identification systems.

Existing face recognition technologies and the widespread use of biometrics introduce a serious threat to individuals' privacy, exacerbated by the fact that biometric identification is often done quietly, without proper consent from the people being observed. For example, the UK uses an estimated 4.2 million surveillance cameras to monitor public areas [4], and it is not feasible to obtain explicit consent from such an extremely large number of persons being watched. Moreover, facial images directly reflect their owners' identity, and they can easily be linked to other sensitive information such as health records and financial records, raising privacy concerns. Biometric data analysis systems often need to employ high-performance third-party servers to conduct complex computational operations on large numbers of biometric data inputs; however, these third-party servers can be accessed by untrusted parties, causing privacy issues.

Among different definitions, information privacy can be defined as the "controlled information release" that permits an anticipated level of utility via a private function that protects the identity of the data owners [5]. Privacy-preserving face recognition involves at l...
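To make the protocol concrete, the sketch below illustrates the pipeline the abstract describes: Eigenfaces are derived via PCA, a client projects its face image onto them, and Laplace noise, the standard mechanism for epsilon-differential privacy over numeric vectors, is added locally before anything leaves the client. This is an illustrative reconstruction, not the authors' reference implementation; the function names, the nearest-neighbour matching step, and the sensitivity value are assumptions made for the example.

```python
# Minimal sketch of the PEEP idea (assumed API, not the paper's code).
import numpy as np

def train_eigenfaces(face_matrix, num_components):
    """PCA on mean-centred training faces; each row of face_matrix is one image."""
    mean_face = face_matrix.mean(axis=0)
    centered = face_matrix - mean_face
    # SVD yields the principal directions (the Eigenfaces) without
    # explicitly forming the covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:num_components]       # eigenfaces: (k, pixels)

def peep_perturb(face, mean_face, eigenfaces, epsilon, sensitivity=1.0):
    """Client side: project a face into Eigenface space, then add Laplace
    noise (local differential privacy) before transmission.
    sensitivity=1.0 is an illustrative placeholder for the true L1 sensitivity."""
    weights = eigenfaces @ (face - mean_face)   # Eigenface coefficients
    scale = sensitivity / epsilon               # Laplace scale b = sensitivity / epsilon
    return weights + np.random.laplace(0.0, scale, weights.shape)

def server_classify(noisy_weights, stored_weights, labels):
    """Untrusted server: standard Eigenface recognition, i.e. a
    nearest-neighbour match over the (perturbed) projections it stores."""
    distances = np.linalg.norm(stored_weights - noisy_weights, axis=1)
    return labels[int(np.argmin(distances))]
```

Under this sketch, only the noisy coefficient vector ever reaches the server, so neither the raw image nor its exact Eigenface projection is exposed; smaller epsilon values inject more noise, trading recognition accuracy for stronger privacy, which is the trade-off behind the accuracy range reported in the abstract.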