2023
DOI: 10.1007/978-981-99-5177-2_5

Data Reconstruction Attack Against Principal Component Analysis

Abstract: Attacking machine learning models is one way to measure their privacy. Studying how well such attacks perform is therefore essential for deciding whether information about a machine learning model can be shared and, if so, how much. In this work, we investigate Principal Component Analysis (PCA), one of the most widely used dimensionality reduction techniques. We refer to a recent paper that shows how to attack PCA using a Membe…
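As background for the abstract above, the sketch below illustrates plain PCA (centering, eigendecomposition of the covariance matrix, projection, and reconstruction), not the paper's attack. All names and the synthetic data are illustrative assumptions; the point is that records near the principal subspace can be approximately recovered from the retained components, which is what makes reconstruction attacks on PCA plausible.

```python
import numpy as np

# Illustrative PCA sketch (not the paper's attack): data that is
# approximately rank-2 can be reconstructed well from its top-2
# principal components.
rng = np.random.default_rng(0)

# Synthetic data: 200 samples in 5 dimensions, variance concentrated
# along two latent directions plus small isotropic noise.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) \
    + 0.01 * rng.normal(size=(200, 5))

# Fit PCA: center the data, then eigendecompose the covariance matrix.
mu = X.mean(axis=0)
Xc = X - mu
cov = Xc.T @ Xc / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
components = eigvecs[:, ::-1][:, :2]     # top-2 principal directions

# Project to 2 dimensions, then reconstruct back to 5.
Z = Xc @ components
X_hat = Z @ components.T + mu

error = np.mean((X - X_hat) ** 2)
print(f"mean squared reconstruction error: {error:.6f}")
```

Because the data is nearly rank-2, the reconstruction error is on the order of the injected noise, showing how much of a record the retained components alone can reveal.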

Cited by 1 publication (1 citation statement)
References 8 publications
“…Today, there are many techniques available for creating synthetic datasets, from Generative Adversarial Networks (GANs) to the most recent diffusion models. In a recent work, the authors showed a data reconstruction attack against Principal Component Analysis (PCA) [1]. The success of their proposed attack relies entirely on the quality of the synthetic data obtained using generative models.…”
Section: Introduction (confidence: 99%)