In recent years, powered by state-of-the-art achievements in a broad range of areas, machine learning has received considerable attention from the healthcare sector. Despite their ability to provide solutions within personalized medicine, strict regulations on the confidentiality of patient health information have in many cases hindered the adoption of deep learning-based solutions in clinical workflows. To allow for the processing of sensitive health information without disclosing the underlying data, we propose a solution based on fully homomorphic encryption (FHE). The considered encryption scheme, MORE (Matrix Operation for Randomization or Encryption), enables the computations within a neural network model to be directly performed on floating point data with a relatively small computational overhead. We consider the well-known MNIST digit recognition problem to evaluate the feasibility of the proposed method and show that performance does not decrease when deep learning is applied on MORE homomorphic data. To further evaluate the suitability of the method for healthcare applications, we first train a model on encrypted data to estimate the outputs of a whole-body circulation (WBC) hemodynamic model and then provide a solution for classifying encrypted X-ray coronary angiography medical images. The findings highlight the potential of the proposed privacy-preserving deep learning methods to outperform existing approaches by providing, within a reasonable amount of time, results equivalent to those achieved by unencrypted models. Lastly, we discuss the security implications of the encryption scheme and show that while the considered cryptosystem promotes efficiency and utility at a lower security level, it is still applicable in certain practical use cases.
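The core property the abstract relies on is that MORE ciphertexts are matrices on which ordinary addition and multiplication act homomorphically: a scalar is hidden on the diagonal of a matrix and conjugated with a secret invertible key, so network layers can operate on ciphertexts directly. The following is a minimal illustrative sketch of that idea, not the paper's implementation; the 2x2 key size, the key and noise distributions, and all numeric values are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret key: a random invertible 2x2 matrix S (hypothetical parameters).
S = rng.uniform(-1.0, 1.0, size=(2, 2))
while abs(np.linalg.det(S)) < 1e-3:  # re-draw until comfortably invertible
    S = rng.uniform(-1.0, 1.0, size=(2, 2))
S_inv = np.linalg.inv(S)

def encrypt(x):
    # Hide the plaintext x in a diagonal matrix next to a random value r,
    # then conjugate with the secret key: C = S @ diag(x, r) @ S^-1.
    r = rng.uniform(-10.0, 10.0)
    return S @ np.diag([x, r]) @ S_inv

def decrypt(C):
    # Undo the conjugation and read the plaintext off the diagonal.
    return (S_inv @ C @ S)[0, 0]

# Because diagonal matrices add and multiply entry-wise, ciphertext
# addition and matrix multiplication map to plaintext + and *.
a, b = encrypt(3.5), encrypt(2.0)
print(decrypt(a + b))  # close to 5.5
print(decrypt(a @ b))  # close to 7.0
```

This also makes the abstract's closing caveat concrete: the scheme is linear over known plaintext/ciphertext pairs, which is why it trades security strength for the efficiency and native floating-point support highlighted above.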
Deep learning (DL)-based algorithms have demonstrated remarkable potential for improving the performance and efficiency of healthcare applications. Since the data typically needs to leave the healthcare facility for model training and inference, e.g., in a cloud-based solution, privacy concerns have been raised. As a result, the demand for privacy-preserving techniques that enable DL model training and inference on secured data has grown significantly. We propose an image obfuscation algorithm that combines a variational autoencoder (VAE) with a random non-bijective pixel intensity mapping to protect the content of medical images, which are subsequently employed in the development of DL-based solutions. A binary classifier is trained on secured coronary angiographic frames to evaluate the utility of obfuscated images in the context of model training. Two possible attack configurations are considered to assess the security level against artificial intelligence (AI)-based reconstruction attempts. Similarity metrics (structural similarity index measure and peak signal-to-noise ratio) are employed to quantify security against human perception. Furthermore, expert readers performed a visual assessment to determine to what extent the reconstructed images are protected against human perception. The proposed algorithm successfully enables DL model training on obfuscated images with no significant computational overhead while ensuring protection against human visual perception and AI-based reconstruction attacks. Regardless of the threat actor's prior knowledge of the target content, the coronary vessels cannot be entirely recovered through an AI-based attack. Although a drop in accuracy can be observed when the classifier is trained on obfuscated images, the performance is deemed satisfactory in the context of a privacy-accuracy trade-off.
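The non-bijective pixel intensity mapping mentioned above can be pictured as a secret random lookup table over the 8-bit grey levels: because several source intensities may collide on the same target value, the table has no exact inverse, which is what makes the obfuscation hard to undo. A minimal sketch of this component alone (the VAE stage and all data here are omitted or invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Secret random lookup table over the 256 grey levels. Drawing each target
# value independently makes collisions almost certain, so the mapping is
# non-bijective and cannot be inverted exactly.
lut = rng.integers(0, 256, size=256, dtype=np.uint8)

def obfuscate(image):
    # Apply the secret lookup table pixel-wise via fancy indexing.
    return lut[image]

# Toy 8-bit array standing in for an angiographic frame (hypothetical data).
frame = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
secured = obfuscate(frame)
```

Because the mapping is applied per pixel, it preserves image shape and costs a single table lookup per pixel, consistent with the "no significant computational overhead" claim in the abstract.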