The most common representation in evolutionary computation is the bit string. This is ideal for modeling binary decision variables, but less useful for variables taking more values. With very little theoretical work existing on how to use evolutionary algorithms for such optimization problems, we study the run time of simple evolutionary algorithms on some OneMax-like functions defined over Ω = {0, 1, . . . , r − 1}^n. More precisely, we regard a variety of problem classes requesting the component-wise minimization of the distance to an unknown target vector z ∈ Ω. For such problems we see a crucial difference in how we extend the standard-bit mutation operator to these multi-valued domains. While it is natural to select each position of the solution vector to be changed independently with probability 1/n, there are various ways to then change such a position. If we change each selected position to a random value different from the original one, we obtain an expected run time of Θ(nr log n). If we change each selected position by either +1 or −1 (random choice), the optimization time reduces to Θ(nr + n log n). If we use a random mutation strength i ∈ {0, 1, . . . , r − 1} with probability inversely proportional to i and change the selected position by either +i or −i (random choice), then the optimization time becomes Θ(n log(r)(log(n) + log(r))), bringing down the dependence on r from linear to polylogarithmic. One of our results depends on a new variant of the lower bounding multiplicative drift theorem.
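The three mutation operators compared above can be sketched as follows. This is a minimal illustration, not the paper's pseudocode; in particular, wrapping out-of-range values around modulo r is an assumption here, since the abstract does not specify how boundary values are handled.

```python
import random

def mutate(x, r, variant, rng=random):
    """Mutate an r-valued vector x (entries in {0, ..., r-1}).

    Each position is selected independently with probability 1/n.
    A selected position is changed according to `variant`:
      - "uniform":  replace by a uniformly random *different* value
                    (expected run time Theta(n r log n))
      - "pm1":      add +1 or -1, chosen uniformly at random
                    (Theta(n r + n log n))
      - "harmonic": add +i or -i, with strength i in {1, ..., r-1}
                    drawn with probability proportional to 1/i
                    (Theta(n log(r) (log(n) + log(r))))

    Assumption (not stated in the abstract): out-of-range values
    wrap around modulo r.
    """
    n = len(x)
    y = list(x)
    for j in range(n):
        if rng.random() < 1.0 / n:  # select position j with prob. 1/n
            if variant == "uniform":
                # adding a uniform offset from {1, ..., r-1} mod r
                # yields a uniform value different from x[j]
                y[j] = (y[j] + rng.randrange(1, r)) % r
            elif variant == "pm1":
                y[j] = (y[j] + rng.choice((1, -1))) % r
            elif variant == "harmonic":
                strengths = range(1, r)
                i = rng.choices(strengths,
                                weights=[1.0 / s for s in strengths])[0]
                y[j] = (y[j] + rng.choice((1, -1)) * i) % r
    return y
```

A (1+1) EA in this setting would repeatedly apply `mutate` to the current solution and accept the offspring whenever its component-wise distance to the target vector z is no larger than the parent's.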