We consider whether ergodic Markov chains with bounded step size remain bounded in probability when their transitions are modified by an adversary on a bounded subset. We provide counterexamples to show that the answer is no in general, and prove theorems to show that the answer is yes under various additional assumptions. We then use our results to prove convergence of various adaptive Markov chain Monte Carlo algorithms.

1. Introduction. This paper considers whether bounded modifications of stable Markov chains remain stable. Specifically, we let $P$ be a fixed time-homogeneous ergodic Markov chain kernel with bounded step size, and let $\{X_n\}$ be a stochastic process that follows the transition probabilities $P$ except on a bounded subset $K$, where an "adversary" can make arbitrary bounded jumps. Under what conditions must such a process $\{X_n\}$ be bounded in probability?

One might think that such boundedness would follow easily, at least under mild regularity and continuity assumptions; that is, that modifying a stable continuous Markov chain inside a bounded set $K$ could not possibly lead to unstable behavior out in the tails. In fact the situation is rather more subtle, as we explore herein. We will provide counterexamples to show that boundedness may fail even for well-behaved continuous chains. We will then show that under various additional conditions, including bounds on transition probabilities and/or small set assumptions and/or geometric ergodicity, such boundedness does hold.
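For concreteness, one natural way to formalize the setup just described is sketched below. The precise definitions used in our results appear later; here we simply assume, for illustration, that the chain lives on $\mathbb{R}^d$ with the Euclidean norm, with $D < \infty$ denoting the step-size bound and $K$ the bounded set on which the adversary may act:
\[
P\bigl(x, \{y : |y - x| > D\}\bigr) = 0 \quad \text{for all } x \qquad \text{(bounded step size)},
\]
\[
\mathbf{P}\bigl(X_{n+1} \in A \mid X_0, \dots, X_n\bigr) = P(X_n, A) \quad \text{whenever } X_n \notin K,
\qquad |X_{n+1} - X_n| \le D \quad \text{always},
\]
while on the event $\{X_n \in K\}$ the conditional law of $X_{n+1}$ may be chosen arbitrarily by the adversary, subject only to the jump bound $D$. In this language, $\{X_n\}$ is bounded in probability if for every $\epsilon > 0$ there is an $R < \infty$ such that $\sup_n \mathbf{P}(|X_n| > R) < \epsilon$.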