2020
DOI: 10.48550/arxiv.2003.05362
Preprint

Privacy-Preserving Adversarial Network (PPAN) for Continuous non-Gaussian Attributes

Abstract: A privacy-preserving adversarial network (PPAN) was recently proposed as an information-theoretic framework for addressing privacy in data sharing. The main idea of this model is to use mutual information as the privacy measure and to adversarially train two deep neural networks, one as the release mechanism and the other as the adversary. The performance of the PPAN model on discrete synthetic data, MNIST handwritten digits, and continuous Gaussian data was evaluated against the analytically opti…
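The abstract describes a minimax setup: a mechanism network against an adversary network, with mutual information as the privacy measure. Below is a minimal PyTorch sketch of one such training step, assuming the common formulation in which the adversary's log-loss on the private attribute serves as a proxy for the mutual-information leakage; the network shapes, names, and the distortion weight `beta` are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

# Sketch of one adversarial training step (assumed formulation).
# The mechanism releases Y from public data X; the adversary tries to
# infer the private attribute S from Y. The adversary's cross-entropy
# on S stands in for the mutual-information leakage I(S; Y).
mechanism = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 4))
adversary = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
opt_mech = torch.optim.Adam(mechanism.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
xent = nn.CrossEntropyLoss()
beta = 1.0  # distortion weight -- illustrative, not from the paper

x = torch.randn(32, 4)          # batch of public data
s = torch.randint(0, 2, (32,))  # binary private attribute

# Adversary step: sharpen its estimate of S from the release Y.
y = mechanism(x).detach()       # detach: do not update the mechanism here
opt_adv.zero_grad()
xent(adversary(y), s).backward()
opt_adv.step()

# Mechanism step: make S hard to infer while keeping Y close to X.
y = mechanism(x)
leakage = -xent(adversary(y), s)    # maximize the adversary's loss
distortion = ((y - x) ** 2).mean()  # MSE utility/distortion term
opt_mech.zero_grad()
(leakage + beta * distortion).backward()
opt_mech.step()
```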

Cited by 1 publication (2 citation statements)
References 6 publications
“…Following [21], the reward function is inversely interpreted as a loss function: $\ell(s_t, a_t) = -r(s_t, a_t)$. Assuming a privacy signal $f(s_t, a_t)$ and an electricity cost signal $g(s_t, a_t)$, the one-step loss function can be defined as follows:
$$\ell(s_t, a_t) = (1 - \lambda)\, f(s_t, a_t) + \lambda\, g(s_t, a_t) \qquad (8)$$
where $\lambda \in [0, 1]$ controls the privacy-cost trade-off. Concretely, for $\lambda = 0$ the goal of the agent will be to minimize the expected cumulative privacy signal, while for $\lambda = 1$ it will be to minimize the expected cumulative energy cost.…”
Section: B. Markov Decision Process (MDP) Model
confidence: 99%
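For concreteness, here is equation (8) as a small Python sketch; the function name and signature are illustrative, not taken from the citing paper.

```python
def one_step_loss(privacy_signal: float, energy_cost: float, lam: float) -> float:
    """One-step loss of equation (8): a convex combination of the privacy
    signal f(s_t, a_t) and the electricity cost signal g(s_t, a_t).

    lam = 0 -> purely minimize privacy leakage; lam = 1 -> purely minimize cost.
    """
    assert 0.0 <= lam <= 1.0, "lambda must lie in [0, 1]"
    return (1.0 - lam) * privacy_signal + lam * energy_cost

# The RL agent's reward is the negated loss: r(s_t, a_t) = -loss.
reward = -one_step_loss(privacy_signal=0.8, energy_cost=0.3, lam=0.5)
```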
“…Calculate reward $r(s_t, a_t)$ from equation (8). Update the next state $s_{t+1}$ based on (6), (7) and observing $y_{t+1}$. Insert $(s_t, a_t, r(s_t, a_t), s_{t+1})$ into the replay buffer $I$.…”
confidence: 99%
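The quoted step is the standard experience-replay bookkeeping of deep Q-learning: store each transition, then train on uniformly sampled minibatches. A minimal sketch follows, with the buffer implementation and variable names as illustrative assumptions rather than details from the citing paper.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity buffer I holding (s_t, a_t, r_t, s_{t+1}) transitions."""

    def __init__(self, capacity: int = 10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted first

    def insert(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size: int):
        # Uniform sampling breaks temporal correlation between training examples.
        return random.sample(self.buffer, batch_size)

# Inside the training loop (sketch):
# reward = -one_step_loss(f(s_t, a_t), g(s_t, a_t), lam)  # equation (8), negated
# s_next = transition(s_t, a_t, y_next)                   # per equations (6), (7)
# buffer.insert(s_t, a_t, reward, s_next)
```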