2006
DOI: 10.1198/000313006x124640
A Framework for Evaluating the Utility of Data Altered to Protect Confidentiality

Cited by 166 publications (126 citation statements)
References 18 publications
“…This disclosure risk must be weighed by the data releaser against the complementary concept of data utility, which measures the value of the released data source to the legitimate data user. In general, decreasing the amount of disclosure risk by applying more stringent geographic masking processes also decreases the accuracy of inferences obtainable from the released data source (Karr et al, 2006). While there is a wide understanding of this tradeoff between disclosure risk and data utility, there is no consensus on a particular methodology to visualize and share confidential data without dramatically limiting any analyses (Curtis et al, 2011).…”
Section: Literature Review (mentioning)
confidence: 99%
“…Reiter (2012) mentions, without presenting numerical results, that the comparison of measures based on specific models is often done informally. If the regression coefficients obtained from original and perturbed data are considered close, for example if the confidence intervals obtained from the models largely overlap, the released data have high utility for that particular analysis (see also Karr et al 2006). …”
Section: General Methods for Measuring Data Utility (mentioning)
confidence: 99%
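The confidence-interval overlap criterion described in this excerpt can be sketched numerically. Below is a minimal illustration, assuming intervals are supplied as (lower, upper) pairs; the function name and the equal-weight averaging over estimands are choices made here for illustration, not a transcription of Karr et al.'s (2006) exact formulation.

```python
import numpy as np

def interval_overlap(orig_ci, masked_ci):
    """Average confidence-interval overlap in the spirit of Karr et al. (2006).

    orig_ci, masked_ci: arrays of shape (k, 2) holding (lower, upper)
    confidence limits for k estimands computed from the original and the
    perturbed data, respectively.  Returns a utility score in [0, 1]:
    1 when the intervals coincide, 0 when they are disjoint.
    """
    orig_ci = np.asarray(orig_ci, dtype=float)
    masked_ci = np.asarray(masked_ci, dtype=float)
    lo = np.maximum(orig_ci[:, 0], masked_ci[:, 0])   # intersection lower end
    hi = np.minimum(orig_ci[:, 1], masked_ci[:, 1])   # intersection upper end
    inter = np.clip(hi - lo, 0.0, None)               # 0 when intervals are disjoint
    len_o = orig_ci[:, 1] - orig_ci[:, 0]
    len_m = masked_ci[:, 1] - masked_ci[:, 0]
    # express the overlap as a fraction of each interval's length, then average
    per_estimand = 0.5 * (inter / len_o + inter / len_m)
    return per_estimand.mean()

print(interval_overlap([[0.0, 1.0]], [[0.0, 1.0]]))  # 1.0 (identical intervals)
print(interval_overlap([[0.0, 1.0]], [[2.0, 3.0]]))  # 0.0 (disjoint intervals)
```

High overlap means the perturbed data support essentially the same inference as the original data for that analysis.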
“…They concentrate only on data utility measures and do not account for disclosure risk. Karr et al (2006) propose measures based on differences between inferences on original and perturbed data that are tailored to normally distributed data, and they also use the propensity score method in Oganian and Karr (2006). Reiter (2012) mentions, without presenting numerical results, that the comparison of measures based on specific models is often done informally.…”
Section: General Methods for Measuring Data Utility (mentioning)
confidence: 99%
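The propensity-score method mentioned in this excerpt scores utility by how well a classifier can tell original records from perturbed ones. The sketch below is one common formulation of that idea (a pMSE-style statistic), not the exact procedure of Oganian and Karr (2006); the function name, the plain gradient-descent logistic fit, and all parameters are assumptions for illustration.

```python
import numpy as np

def pmse_utility(original, perturbed, iters=500, lr=0.1):
    """Propensity-score utility sketch: pMSE = mean((p - 0.5)^2).

    Stack original and perturbed records, fit a logistic regression that
    tries to tell them apart, and score how far the fitted propensities
    stray from 0.5.  A value near 0 means the two sources are hard to
    distinguish, i.e. the perturbed data have high utility; 0.25 is the
    worst case (perfect separation).
    """
    X = np.vstack([original, perturbed]).astype(float)
    y = np.concatenate([np.zeros(len(original)), np.ones(len(perturbed))])
    # standardise the features and add an intercept column
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    X = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(X.shape[1])
    for _ in range(iters):                     # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return np.mean((p - 0.5) ** 2)

rng = np.random.default_rng(0)
orig = rng.normal(size=(500, 3))
same = rng.normal(size=(500, 3))              # drawn from the same distribution
shifted = rng.normal(loc=3.0, size=(500, 3))  # strongly perturbed
print(pmse_utility(orig, same) < pmse_utility(orig, shifted))  # True
```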
“…In particular, we estimate an AR(2) process for each of $X_{kt}$, $X^{s}_{kt}$, and $X^{(i)}_{kt}$. We then assess the number of missing time-series estimates (repeated suppressions in $X^{s}_{kt}$ may lead to time series that are too short), the number of significant coefficients for the first lag of the AR(2), and the interval overlap measure $J_k$ as suggested by [12]. Table 2 presents these results for job creation births.…”
Section: Analytical Validity (mentioning)
confidence: 99%
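The analytical-validity check in this excerpt (fit an AR(2) per series, flag series too short to estimate, test the first-lag coefficient) can be sketched as below. This is a generic OLS fit of an AR(2), not the citing paper's code; the function name, the `min_len` cutoff, and the simulated series are assumptions for illustration.

```python
import numpy as np

def fit_ar2(x, min_len=10):
    """OLS fit of an AR(2) model x_t = c + a1*x_{t-1} + a2*x_{t-2} + e_t.

    Returns (coef, tstat) for (c, a1, a2), or None when the series is
    too short to estimate -- mirroring the repeated-suppression case in
    the excerpt above.
    """
    x = np.asarray(x, dtype=float)
    if len(x) < min_len:
        return None
    y = x[2:]
    X = np.column_stack([np.ones(len(y)), x[1:-1], x[:-2]])  # [1, lag1, lag2]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof                     # residual variance
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return coef, coef / se

# simulate a stationary AR(2) with a1 = 0.5, a2 = 0.2 and recover the lags
rng = np.random.default_rng(1)
e = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.5 * x[t - 1] + 0.2 * x[t - 2] + e[t]
coef, tstat = fit_ar2(x)
print(abs(tstat[1]) > 2)   # first lag clearly significant -> True
```

Counting how often the first-lag t-statistic stays significant across the original and protected series is then a direct analytical-validity comparison.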