The remarkable success of machine learning has fostered a growing number of cloud-based intelligent services for mobile users. Such a service requires the user to send data, e.g., images, voice, and video, to the provider, which poses a serious challenge to user privacy. To address this, prior work either obfuscates the data, e.g., by adding noise or removing identifying information, or sends representations extracted from the data, e.g., anonymized features. These approaches struggle to balance service utility against data privacy, because obfuscated data reduces utility and extracted representations may still reveal sensitive information.

This work departs from prior work in methodology: we leverage adversarial learning to better balance privacy and utility. We design a representation encoder that generates feature representations optimized against the privacy disclosure risk of sensitive information (a measure of privacy), as estimated by privacy adversaries, and concurrently optimized for task inference accuracy (a measure of utility), as estimated by a utility discriminator. The result is the privacy adversarial network (PAN), a novel deep model with a new training algorithm that automatically learns representations from raw data. The trained encoder can be deployed on the user side to generate representations that satisfy the task-defined utility requirements and the user-specified/agnostic privacy budgets.

Intuitively, PAN adversarially forces the extracted representations to convey only the information required by the target task. Surprisingly, this constitutes an implicit regularization that actually improves task accuracy. As a result, PAN achieves better utility and better privacy at the same time. We report extensive experiments on six popular datasets and demonstrate the superiority of PAN over alternative methods reported in prior work.
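The adversarial objective described above can be sketched as follows. This is a minimal, illustrative toy in NumPy, not the paper's implementation: the encoder and both heads are reduced to linear/logistic models, the data is random, and the trade-off weight `lam` is a hypothetical value. It only shows how the encoder's loss combines the utility discriminator's loss (to be minimized) and the privacy adversary's loss (to be maximized).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only; the paper's encoder is a deep network).
d_in, d_feat, n = 8, 4, 32
X = rng.normal(size=(n, d_in))           # raw user data
y_task = rng.integers(0, 2, size=n)      # utility label (target task)
y_priv = rng.integers(0, 2, size=n)      # sensitive attribute

W_enc = rng.normal(scale=0.1, size=(d_in, d_feat))   # encoder (user side)
w_util = rng.normal(scale=0.1, size=d_feat)          # utility discriminator
w_priv = rng.normal(scale=0.1, size=d_feat)          # privacy adversary

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    # Binary cross-entropy, clipped for numerical stability.
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

lam = 0.5  # privacy/utility trade-off weight (hypothetical value)

Z = X @ W_enc                                   # representation sent to the cloud
loss_util = bce(sigmoid(Z @ w_util), y_task)    # utility: task inference loss
loss_priv = bce(sigmoid(Z @ w_priv), y_priv)    # privacy: adversary's loss

# Encoder objective: keep the task decodable from Z while making the
# sensitive attribute hard for the privacy adversary to infer.
loss_encoder = loss_util - lam * loss_priv
```

In actual training, the utility discriminator and privacy adversary would each be updated to minimize their own losses, while the encoder is updated against `loss_encoder`, alternating the two steps as in standard adversarial learning.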
CCS Concepts: • Human-centered computing → Ubiquitous and mobile computing systems and tools; • Security and privacy → Usability in security and privacy.

144:2 • Liu et al.

activity records, to the service provider, posing well-known privacy risks [1, 2, 9]. Our goal is to avoid disclosing raw data to service providers by creating a device-local intermediate component that encodes the raw data and sends only the encoded data to the service provider. The encoded data must be both useful and private. For inference-based services, utility can be quantified by the inference accuracy achieved by the service provider using a discriminative model, and privacy can be quantified by the disclosure risk of private information.

Existing solutions addressing the privacy concern struggle to balance these two seemingly conflicting objectives: privacy vs. utility. An obvious and widely practiced solution is to transform the raw data into task-specific features and upload only the features, as in Google Now [17] and Google Cloud [16]. This not only reduces the data utility but is also vulnerable to reverse models that reconstruct the raw data from the extracted features [28]. The authors of...