We introduce a tunable measure of information leakage called maximal α-leakage. This measure quantifies the maximal gain of an adversary in inferring any (potentially random) function of a dataset from a release of the data. The inferential capability of the adversary is, in turn, quantified by a class of adversarial loss functions that we introduce as α-loss, α ∈ [1, ∞]. The choice of α determines the specific adversarial action, ranging from refining a belief (about any function of the data) for α = 1 to guessing the most likely value for α = ∞, with refinement of the αth moment of the belief for α in between. Maximal α-leakage then quantifies the adversarial gain under α-loss over all possible functions of the data. In particular, for the extremal values α = 1 and α = ∞, maximal α-leakage simplifies to mutual information and maximal leakage, respectively. For α ∈ (1, ∞), this measure is shown to be the Arimoto channel capacity of order α. We show that maximal α-leakage satisfies data processing inequalities and a sub-additivity property, thereby allowing for a weak composition result. Building on these properties, we use maximal α-leakage as the privacy measure and study the problem of data publishing with privacy guarantees, wherein the utility of the released data is ensured via a hard distortion constraint. Unlike average distortion, hard distortion provides a deterministic guarantee of fidelity. We show that under a hard distortion constraint, for α > 1 the optimal mechanism is independent of α, and therefore the resulting optimal privacy-utility tradeoff is the same for all values of α > 1. Finally, the tunability of maximal α-leakage as a privacy measure is also illustrated for binary data with average Hamming distortion as the utility measure.
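
For concreteness, the following LaTeX sketch records one standard parameterization of the α-loss family summarized above, together with its extremal behavior; the precise normalization is an assumption here and is fixed formally in the body of the paper.

\[
\ell_\alpha\bigl(y, P_{\hat{Y}}\bigr) \;=\; \frac{\alpha}{\alpha-1}\Bigl(1 - P_{\hat{Y}}(y)^{\frac{\alpha-1}{\alpha}}\Bigr), \qquad \alpha \in (1,\infty),
\]
with the extremal cases recovered as limits:
\[
\lim_{\alpha \to 1} \ell_\alpha\bigl(y, P_{\hat{Y}}\bigr) \;=\; \log\frac{1}{P_{\hat{Y}}(y)}
\quad \text{(log-loss, belief refinement)},
\qquad
\lim_{\alpha \to \infty} \ell_\alpha\bigl(y, P_{\hat{Y}}\bigr) \;=\; 1 - P_{\hat{Y}}(y)
\quad \text{(probability of error, hard guessing)}.
\]

Under this parameterization, the two limits correspond to the extremal adversarial actions described in the abstract: minimizing expected log-loss (α = 1, leading to mutual information) and maximizing the probability of correctly guessing the most likely value (α = ∞, leading to maximal leakage).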