Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains. Most research on adversarial examples takes as its only constraint that the perturbed images are similar to the originals. However, real-world application of these ideas often requires the examples to satisfy additional objectives, which are typically enforced through custom modifications of the perturbation process. In this paper, we propose adversarial generative nets (AGNs), a general methodology to train a generator neural network to emit adversarial examples satisfying desired objectives. We demonstrate the ability of AGNs to accommodate a wide range of objectives, including imprecise ones that are difficult to model, in two application domains. In particular, we demonstrate physical adversarial examples (eyeglass frames designed to fool face recognition) with better robustness, inconspicuousness, and scalability than previous approaches, as well as a new attack to fool a handwritten-digit classifier.

In this paper, we focus on contexts in which it is necessary to model additional objectives of adversarial inputs. For example, our prior work considered a scenario in which adversaries could not manipulate input images directly, but, rather, could only manipulate the physical artifacts captured in such images [68]. Using eyeglasses for fooling face-recognition systems as a driving example, we showed how to encode various objectives into the process of generating eyeglass frames, such as ensuring that the frames were capable of being physically realized by an off-the-shelf printer. As another example, Evtimov et al. considered generating shapes that, when attached to street signs, would seem harmless to human observers but would lead neural networks to misclassify the signs [20].

These efforts modeled the various objectives they considered in an ad hoc fashion. In contrast, in this paper, we propose a general framework for capturing such objectives in the process of generating adversarial inputs. Our framework builds on recent work in generative adversarial networks (GANs) [25] to train an attack generator, i.e., a neural network that can generate successful attack instances that meet certain objectives. Moreover, our framework is not only general but, unlike previous attacks, produces a large number of diverse adversarial examples that meet the desired objectives. This ability could be leveraged by an attacker to generate attacks that are unlike previous ones (and hence more likely to succeed), but also by defenders to generate labeled negative inputs with which to augment the training of their classifiers. Due to our framework's basis in GANs, we refer to it using the anagram AGNs, for adversarial generative nets.

To illustrate the utility of AGNs, we return to the task of printing eyeglasses to fool face-recognition systems [68] and demonstrate how to accommodate a number of types of objectives within it. Specifically, we use AGNs to accommodate robustness objectives to ensure...
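To make the framework described above more concrete, the following is a minimal, hypothetical sketch of an AGN-style training step, not the paper's actual architecture or loss formulation. A generator G is updated against two signals at once: a discriminator D that pushes its outputs to resemble real eyeglass-frame textures (inconspicuousness), and a frozen face recognizer whose misclassification loss pushes the overlaid frames to be adversarial. The network architectures, image size, `frame_mask` overlay, `impersonation_label`, and the weighting `kappa` are all illustrative assumptions.

```python
# Illustrative AGN-style sketch (assumed details, not the paper's exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG = 64  # toy face-image resolution for this sketch

class Generator(nn.Module):
    def __init__(self, z_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * IMG * IMG), nn.Tanh(),
        )
    def forward(self, z):
        # Emit a full-frame RGB texture; a mask later restricts it to the eyeglass region.
        return self.net(z).view(-1, 3, IMG, IMG)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * IMG * IMG, 256),
                                 nn.LeakyReLU(0.2), nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

def agn_step(G, D, recognizer, opt_G, opt_D, faces, real_frames, frame_mask,
             impersonation_label, kappa=0.5):
    """One alternating AGN update; the face recognizer stays frozen throughout."""
    z = torch.randn(faces.size(0), 100)
    fake_frames = G(z) * frame_mask  # keep the perturbation inside the eyeglass-frame region

    # Discriminator update: distinguish real frame textures from generated ones.
    opt_D.zero_grad()
    d_real, d_fake = D(real_frames), D(fake_frames.detach())
    loss_D = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    loss_D.backward()
    opt_D.step()

    # Generator update: look realistic to D *and* drive the recognizer to the target identity.
    opt_G.zero_grad()
    d_fake = D(fake_frames)
    loss_real = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    attacked = faces * (1 - frame_mask) + fake_frames          # overlay frames onto the faces
    loss_attack = F.cross_entropy(recognizer(attacked), impersonation_label)
    (loss_real + kappa * loss_attack).backward()
    opt_G.step()
    return loss_D.item(), loss_real.item(), loss_attack.item()
```

Because each fresh draw of z yields a fresh frame texture, a trained generator can emit many diverse adversarial examples, which is the source of the diversity advantage noted above.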