A long-standing and fundamental issue in computer security is to control the flow of information, whether to prevent confidential information from being leaked, or to prevent trusted information from being tainted. While there have been many efforts aimed at preventing improper flows completely (see, for example, the survey by Sabelfeld and Myers (2003)), it has long been recognized that perfection is often impossible in practice. A basic example is a login program: whenever it rejects an incorrect password, it unavoidably reveals that the secret password differs from the one that was entered. More subtly, systems may be vulnerable to side-channel attacks, because observable characteristics like running time and power consumption may depend, at least partially, on sensitive information.

For these reasons, the possibility of quantifying information flow becomes attractive, as this could allow certain improper flows to be tolerated on the grounds that they are 'small'. While there was early work on quantitative information flow by Denning (1983), Millen (1987), McLean (1990), and Gray (1991), the area received relatively little attention until the past decade, when it was revitalized starting with the efforts of Clark, Hunt, and Malacaria (2001).

In the past decade, there has been too much work for a comprehensive survey here, but we can briefly describe the main themes that have been explored. From the perspective of foundations, there have been a variety of studies aimed at defining quantitative measures of information flow for a variety of system models, establishing the operational significance of these measures with respect to security, and establishing their mathematical properties, including relationships among the different measures that have been considered. Papers with a foundational focus include those of
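To make the login example concrete, one standard way to quantify such a leak is as the reduction in Shannon entropy from the attacker's prior uncertainty to their posterior uncertainty. The sketch below (illustrative only; the function name and the assumption of a uniformly distributed secret are ours, not from any particular paper) computes the bits leaked by a single rejected guess: the prior is uniform over n candidate passwords, and a rejection rules out exactly one of them.

```python
import math

def shannon_leakage_on_reject(n_passwords: int) -> float:
    """Bits of Shannon leakage caused by one rejected login attempt,
    assuming the secret password is uniform over n_passwords candidates.

    Prior uncertainty:     log2(n) bits.
    Posterior uncertainty: log2(n - 1) bits, since a rejection rules
    out exactly one candidate and the rest stay equally likely.
    Leakage = prior - posterior.
    """
    prior = math.log2(n_passwords)
    posterior = math.log2(n_passwords - 1)
    return prior - posterior

# A single rejection leaks very little when the password space is large,
# which is why such flows are often tolerated as 'small':
print(shannon_leakage_on_reject(2 ** 20))
```

Under this measure the leak shrinks as the password space grows, matching the intuition that a rejected guess against a large space is a tolerably small flow, whereas against a two-password space it reveals the secret completely (one full bit).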