We consider Bandits with Knapsacks (henceforth, BwK), a general model for multi-armed bandits under supply/budget constraints. In particular, a bandit algorithm needs to solve the well-known knapsack problem: find an optimal packing of items into a limited-size knapsack. The BwK problem is a common generalization of numerous motivating examples, which range from dynamic pricing to repeated auctions to dynamic ad allocation to network routing and scheduling. While prior work on BwK focused on the stochastic version, we pioneer the other extreme, in which the outcomes can be chosen adversarially. This is a considerably harder problem, compared to both the stochastic version and the "classic" adversarial bandits, in that regret minimization is no longer feasible. Instead, the objective is to minimize the competitive ratio: the ratio of the benchmark reward to the algorithm's reward.

We design an algorithm with competitive ratio O(log T) relative to the best fixed distribution over actions, where T is the time horizon; we also prove a matching lower bound. The key conceptual contribution is a new perspective on the stochastic version of the problem. We suggest a new algorithm for the stochastic version, which builds on the framework of regret minimization in repeated games and admits a substantially simpler analysis compared to prior work. We then analyze this algorithm for the adversarial version and use it as a subroutine to solve the latter.

Our algorithm is the first "black-box reduction" from bandits to BwK: it takes an arbitrary bandit algorithm and uses it as a subroutine. We use this reduction to derive several extensions.

* An extended abstract was published in FOCS 2019: 60th Annual IEEE Symposium on Foundations of Computer Science. The conference version corresponds (as an extended abstract) to the March 2019 version of this manuscript. Since then, we have improved the approximation ratios in Sections 5 and 6, reducing the dependence on d and shaving off some constant factors. In particular, we removed some looseness in the algorithm in Section 5 and made the final computation somewhat more efficient. We have also made the lower bound statements more explicit and expanded the discussion of open questions.
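To make the repeated-game perspective and the "black-box reduction" mentioned above more concrete, the sketch below pairs an arbitrary bandit algorithm (here, EXP3) as a primal player choosing arms with a Hedge dual player choosing a distribution over resources; both respond to a Lagrangian-style payoff that trades off reward against budget consumption. This is a hedged illustration only: the class names (`Exp3`, `Hedge`, `lagrangian_game_sketch`), the payoff scaling, and the stopping rule are our own illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch (not the paper's exact algorithm): a primal-dual repeated game
# in the spirit of "regret minimization in repeated games". A primal bandit
# algorithm (treated as a black box) picks arms; a dual Hedge player picks a
# distribution over resources. All names and scalings here are illustrative.

import numpy as np


class Exp3:
    """Standard EXP3 adversarial bandit algorithm (the primal, black-box player)."""

    def __init__(self, n_arms, gamma=0.05):
        self.n = n_arms
        self.gamma = gamma
        self.weights = np.ones(n_arms)
        self.probs = np.full(n_arms, 1.0 / n_arms)

    def select(self, rng):
        w = self.weights / self.weights.sum()
        self.probs = (1 - self.gamma) * w + self.gamma / self.n
        return rng.choice(self.n, p=self.probs)

    def update(self, arm, payoff):
        # payoff is assumed to lie in [0, 1]; importance-weighted estimate.
        est = payoff / self.probs[arm]
        self.weights[arm] *= np.exp(self.gamma * est / self.n)


class Hedge:
    """Multiplicative weights with full feedback (the dual player over resources)."""

    def __init__(self, n_resources, eta=0.1):
        self.weights = np.ones(n_resources)
        self.eta = eta

    def distribution(self):
        return self.weights / self.weights.sum()

    def update(self, payoffs):
        self.weights *= np.exp(self.eta * payoffs)


def lagrangian_game_sketch(primal, environment, T, budget, n_resources, seed=0):
    """Play the repeated game until some resource budget is exhausted.

    `environment(t, arm)` returns (reward, consumption_vector), both in [0, 1];
    it may be stochastic or adversarial. `primal` is any bandit algorithm
    exposing select()/update() -- this is the black-box ingredient.
    """
    rng = np.random.default_rng(seed)
    dual = Hedge(n_resources)
    remaining = np.full(n_resources, float(budget))
    ratio = T / budget                      # scaling between reward and consumption
    total_reward = 0.0

    for t in range(T):
        arm = primal.select(rng)
        reward, consumption = environment(t, arm)
        consumption = np.asarray(consumption, dtype=float)
        remaining -= consumption
        if np.any(remaining < 0):           # stop when any budget runs out
            break
        total_reward += reward

        # Lagrangian-style payoff: reward minus dual-weighted, rescaled consumption,
        # clipped into [0, 1] for the primal update (an illustrative choice).
        lam = dual.distribution()
        lagrangian = reward + 1.0 - ratio * float(lam @ consumption)
        primal.update(arm, np.clip(lagrangian / (1.0 + ratio), 0.0, 1.0))

        # The dual player gains from resources that are being consumed quickly.
        dual.update(ratio * consumption - reward)

    return total_reward
```

In this sketch, swapping `Exp3` for any other bandit algorithm with the same `select()`/`update()` interface is what the black-box flavor of the reduction would amount to; the two no-regret players jointly play a zero-sum-style Lagrangian game, which is the general spirit (though not the precise formulation) of the framework referenced in the abstract.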