Privacy risk assessments aim to analyze and quantify the privacy risks associated with new systems. As such, they are critically important in ensuring that adequate privacy protections are built in. However, current methods to quantify privacy risk rely heavily on experienced analysts picking the "correct" risk level on, e.g., a five-point scale. In this paper, we argue that a more scientific quantification of privacy risk increases accuracy and reliability, and can thus make it easier to build privacy-friendly systems. We discuss how the impact and likelihood of privacy violations can be decomposed and quantified, and stress the importance of meaningful metrics and units of measurement. We suggest a method of quantifying and representing privacy risk that considers a collection of factors as well as a variety of contexts and attacker models. We conclude by identifying some of the major research questions that must be answered to take this approach further in a variety of application scenarios.

Under the EU's GDPR (General Data Protection Regulation) [11], privacy impact assessments (PIAs, called "data protection impact assessments" in the regulation) are mandated for some cases, including surveillance, data sharing, and new technologies. This is relevant worldwide because of the GDPR's global reach. As PIAs include consultation with stakeholders, they are also a useful mechanism for obtaining stakeholder buy-in for what might otherwise be seen as "creepy" data processing.

However, the wider application and impact of PIAs may be limited because privacy risk assessments currently rely heavily on experience, analogy, and imagination; that is, risk assessment more closely resembles an art than a science. We argue that a more scientific approach to risk assessment can improve the outcomes of privacy impact assessments by making them more consistent and systematic. Beyond measuring and communicating an individual privacy risk, we envision at least five further uses for privacy risk metrics: to quantify the effect of privacy controls, to compare the effects of different controls, to analyze trends in privacy risk over time, to compute a system's aggregate privacy risk from its components, and to rank privacy risks.

Contributions. In this paper, we investigate how to quantify privacy risk systematically, with the aim of moving privacy risk assessment from being an art closer to being a science. We focus on data-driven privacy (i.e., the impact of data decisions, possibly outside the data sphere) because this is the scope of the GDPR, currently the strongest driver of PIAs. In line with the common decomposition of risk into impact and likelihood, we discuss the quantification of impact and likelihood separately and suggest possible metrics for each (Sections 4 and 5). We then discuss how metrics for impact and likelihood can be combined to form privacy risk metrics that can be used directly in privacy impact assessments and privacy requirements engineering (Section 6). We illustrate an initial approach to measuring and representing privacy risk in a case study with two typical known privacy risks (Section...
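To make the decomposition mentioned above concrete: the classical combination of the two components is multiplicative, here written per attacker model $a$ and context $c$. The product form is only an illustrative assumption; the combinations discussed in Section 6 need not be multiplicative.

\[
\mathrm{risk}(a, c) \;=\; \mathrm{impact}(a, c) \times \mathrm{likelihood}(a, c)
\]

For example, a violation with impact 0.8 and likelihood 0.3 (both normalized to $[0, 1]$) would receive a risk level of 0.24 under this combination.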
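As a minimal sketch of what such a representation could look like, the Python fragment below records one (impact, likelihood) quantification per attacker model and context, and demonstrates two of the envisioned uses: ranking risks and quantifying the effect of a privacy control. All names, the invented numbers, the normalization to $[0, 1]$, and the multiplicative combination are assumptions made purely for illustration, not the method developed in the remainder of the paper.

from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacyRisk:
    impact: float      # severity of the violation if it occurs, in [0, 1]
    likelihood: float  # probability that the violation occurs, in [0, 1]

    @property
    def level(self) -> float:
        # Classical combination: risk = impact x likelihood.
        return self.impact * self.likelihood

# One quantification per (attacker model, context) pair; the pairs and
# numbers are invented for illustration.
profile = {
    ("insider", "data sharing"):  PrivacyRisk(impact=0.8, likelihood=0.3),
    ("outsider", "data sharing"): PrivacyRisk(impact=0.8, likelihood=0.1),
    ("insider", "surveillance"):  PrivacyRisk(impact=0.6, likelihood=0.5),
}

def rank(risks):
    # Rank risks from highest to lowest level (one envisioned use).
    return sorted(risks.items(), key=lambda kv: kv[1].level, reverse=True)

def control_effect(before, after):
    # Effect of a privacy control as the reduction in risk level.
    return {k: before[k].level - after[k].level for k in before}

# A hypothetical control that halves every likelihood.
with_control = {k: PrivacyRisk(r.impact, r.likelihood / 2)
                for k, r in profile.items()}

for (attacker, context), risk in rank(profile):
    print(f"{attacker}/{context}: risk level {risk.level:.2f}")
print(control_effect(profile, with_control))

Representing a risk as a mapping rather than as a single number keeps the per-attacker, per-context structure available, which is what the aggregation, comparison, and trend-analysis uses listed above would operate on.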