Self-adjustment of parameters can significantly improve the performance of evolutionary algorithms. A notable example is the (1 + (λ, λ)) genetic algorithm, where adaptation of the population size helps to achieve linear running time on the OneMax problem. However, on problems which interfere with the assumptions behind the self-adjustment procedure, its usage can lead to performance degradation. In particular, this is the case with the "one-fifth rule" on problems with weak fitness-distance correlation. We propose a modification of the "one-fifth rule" that has a less negative impact on performance in the cases where the original rule is destructive. Our modification, while still yielding a provable linear runtime on OneMax, shows better results on linear functions with random weights, as well as on random satisfiable MAX-3SAT problems.
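For context, the standard one-fifth success rule referenced above adjusts the offspring population size λ multiplicatively: λ shrinks after an improving iteration and grows (more slowly) after a stagnating one. A minimal sketch, assuming the common update strength F and the cap λ ≤ n as illustrative choices (the function name and constants are not taken from the paper):

```python
def update_lambda(lam, success, n, F=1.5):
    """One step of the one-fifth success rule for the (1 + (lambda, lambda)) GA.

    On success (fitness improved), lambda is divided by F; on failure,
    lambda is multiplied by F^(1/4), so that a success rate of roughly
    one fifth keeps lambda stable. F=1.5 and the clamp to [1, n] are
    illustrative assumptions, not prescribed by the abstract.
    """
    if success:
        lam = lam / F            # improvement: decrease lambda
    else:
        lam = lam * F ** 0.25    # stagnation: increase lambda slowly
    return min(max(lam, 1.0), float(n))  # keep lambda within [1, n]
```

One success followed by four failures leaves λ unchanged, which is the sense in which a one-in-five success rate is the rule's equilibrium.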