Finding a maximal independent set (MIS) in a graph is a cornerstone task in distributed computing. The local nature of an MIS allows for fast solutions in a static distributed setting, which are logarithmic in the number of nodes or in their degrees [Luby 1986, Ghaffari 2015]. By running a (static) distributed MIS algorithm after a topology change occurs, one can easily obtain a solution with the same complexity also for the dynamic distributed model, in which edges or nodes may be inserted or deleted.

In this paper, we take a different approach, which exploits locality to the extreme, and show how to update an MIS in a dynamic distributed setting, either synchronous or asynchronous, with only a single adjustment, meaning that a single node changes its output, and in a single round, in expectation. These strong guarantees hold for the complete fully dynamic setting: we handle all cases of insertions and deletions, of edges as well as nodes, gracefully and abruptly. This strongly separates the static and dynamic distributed models, as super-constant lower bounds exist for computing an MIS in the former. We prove that for any deterministic algorithm, there is a topology change that requires n adjustments, thus also strongly separating deterministic and randomized solutions.

Our results are obtained by a novel analysis of the surprisingly simple solution of carefully simulating the greedy sequential MIS algorithm with a random ordering of the nodes. As such, our algorithm has a direct application as a 3-approximation algorithm for correlation clustering. This adds to the important toolbox of distributed graph decompositions, which are widely used as crucial building blocks in distributed computing.

Finally, our algorithm enjoys a useful history-independence property, meaning that the distribution of the output structure depends only on the current graph, and does not depend on the history of topology changes that constructed that graph.
This means that the output cannot be chosen, or even biased, by the adversary, in case its goal is to prevent us from optimizing some objective function. Moreover, history-independent algorithms compose nicely, which allows us to obtain history-independent coloring and matching algorithms via standard reductions.
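The sequential process that the distributed algorithm simulates can be sketched as follows: draw a uniformly random permutation of the nodes, then scan them in that order, adding each node to the MIS whenever none of its neighbors was added earlier. This is a minimal illustrative sketch, not the paper's distributed implementation; the adjacency-dict representation `adj` and the `seed` parameter are assumptions for the example.

```python
import random

def greedy_mis(adj, seed=None):
    """Greedy sequential MIS under a random node ordering.

    adj: dict mapping each node to the set of its neighbors.
    Returns the set of nodes selected into the MIS.
    """
    rng = random.Random(seed)
    order = list(adj)
    rng.shuffle(order)  # uniformly random permutation of the nodes
    in_mis = set()
    for v in order:
        # v joins the MIS iff no earlier neighbor already joined
        if not any(u in in_mis for u in adj[v]):
            in_mis.add(v)
    return in_mis

# Example on a path 0 - 1 - 2: the result is always independent and maximal.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
mis = greedy_mis(adj, seed=7)
```

Because the output distribution of this process is determined solely by the current graph (the permutation is drawn fresh, independently of past changes), it exhibits exactly the history-independence property described above.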