Fairness in advertising is a topic of particular concern, motivated by theoretical and empirical observations in both the computer science and economics literature. We examine the problem of fairness in advertising for general-purpose platforms that serve advertisers from many different categories. First, we propose inter-category and intra-category fairness desiderata that take inspiration from individual fairness and envy-freeness. Second, we investigate the "platform utility" (a proxy for the quality of the allocation) achievable by mechanisms satisfying these desiderata. More specifically, we compare the utility of fair mechanisms against the unfair optimum, and we show by construction that our fairness desiderata are compatible with high utility. That is, we construct a family of fair mechanisms that perform close to optimally within a class of fair mechanisms. Our mechanisms also enjoy attractive implementation properties, including metric-obliviousness, which allows the platform to produce fair allocations without needing to know the specifics of the fairness requirements.
Despite excellent theoretical support, Differential Privacy (DP) can still be a challenge to implement in practice. In part, this challenge stems from the very real difficulties of converting arbitrary- or infinite-precision theoretical mechanisms to the often messy realities of floating-point or fixed-precision arithmetic. Beginning with Mironov's troubling demonstration of the security issues in floating-point implementations of the Laplace mechanism, many reasonable concerns have been raised about the vulnerabilities of real-world implementations of DP.

In this work, we examine the practicalities of implementing the exponential mechanism of McSherry and Talwar. We demonstrate that naive or malicious implementations can result in catastrophic privacy failures. To address these problems, we show that the mechanism can be implemented exactly for a rich set of values of the privacy parameter ε and utility functions, with limited practical overhead in running time and minimal code complexity.

How do we achieve this result? We employ a simple trick: switching from base e to base 2, which allows us to perform precise base-2 arithmetic. A short, precise expression is always available for ε, and the only approximation error we incur is in converting the base-2 privacy parameter back to base e for reporting purposes. The core base-2 arithmetic of the mechanism can be implemented simply and efficiently using open-source high-precision floating-point libraries. Furthermore, the exact nature of the implementation lends itself to simple monitoring of correctness and proofs of privacy.
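To make the base-2 idea concrete, here is a minimal sketch, not the paper's implementation (which relies on high-precision floating-point libraries), of exact sampling for the exponential mechanism in the special case where the base-2 privacy parameter η times each utility value is an integer. Under that assumption every weight 2^(η·u(r)) is an exact power of two, so selection reduces to exact integer arithmetic with no floating point at all; the function name and the integrality restriction are our own simplifications.

```python
import random

def base2_exponential_mechanism(outcomes, utility, eta, rng=None):
    """Select outcome r with probability proportional to 2**(eta * utility(r)).

    Sketch only: assumes eta * utility(r) is an integer for every outcome,
    so each weight is an exact power of two and all arithmetic below is
    exact (Python ints are arbitrary precision). The base-e privacy
    parameter reported to users would be epsilon = eta * ln(2).
    """
    rng = rng or random.SystemRandom()
    exponents = [eta * utility(r) for r in outcomes]
    assert all(isinstance(k, int) for k in exponents), "sketch requires integer exponents"
    # Shift so the smallest exponent is zero; relative weights are unchanged.
    lo = min(exponents)
    weights = [2 ** (k - lo) for k in exponents]
    total = sum(weights)
    # Exact inverse-CDF sampling: a uniform integer in [0, total), then a
    # walk over the cumulative integer weights. No rounding can occur.
    u = rng.randrange(total)
    acc = 0
    for r, w in zip(outcomes, weights):
        acc += w
        if u < acc:
            return r
```

For example, with outcomes 0..3, utility(r) = r, and η = 1, the weights are exactly 1, 2, 4, 8. The shift by the minimum exponent mirrors the standard trick of normalizing before exponentiating, but here it keeps every weight an integer rather than merely preventing overflow.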
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.