In this paper we present a new method for generating humanoid robot movements. We propose to merge the intuitiveness of the widely used key-frame technique with the optimization power of automatic learning algorithms. Key-frame approaches are straightforward, but they require the user to precisely define the position of every robot joint, a very time-consuming task. Automatic learning strategies can search for a good combination of parameters yielding an effective motion without requiring user effort; on the other hand, their search usually cannot be easily guided by the operator, and the results can hardly be modified by hand. While the fitness function gives a quantitative evaluation of the motion (e.g. "How far did the robot move?"), it cannot provide a qualitative evaluation, such as the similarity of the motion to human movement. In the proposed technique the user, exploiting the key-frame approach, can intuitively bound the search by specifying relationships to be maintained between the joints and by giving a range of possible values for easily understandable parameters. The automatic learning algorithm then performs a local exploration of the parameter space within the defined bounds. Thanks to the clear meaning of the parameters, the user can give a qualitative evaluation of the generated motion (e.g. "This walking gait looks odd; let's raise the knee more") and easily introduce new constraints on the motion. Experimental results show the approach to be successful in terms of reduced motion-development time, natural appearance of the motion, and stability of the resulting walking gait.
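The bounded search described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter names (`knee_lift`, `step_length`, `step_period`), their ranges, the inter-parameter constraint, and the fitness function are all hypothetical stand-ins for the user-supplied bounds and the motion evaluation (e.g. distance walked in simulation).

```python
import random

# User-specified ranges for intuitively meaningful gait parameters.
# (Names, units, and values are assumptions for illustration only.)
BOUNDS = {
    "knee_lift":   (0.2, 0.6),    # radians
    "step_length": (0.05, 0.15),  # meters
    "step_period": (0.4, 0.8),    # seconds
}

def satisfies_constraints(p):
    """A hypothetical user-specified relationship between parameters,
    e.g. longer steps require a higher knee lift."""
    return p["knee_lift"] >= 2.0 * p["step_length"]

def fitness(p):
    """Stand-in for the real quantitative evaluation
    (e.g. distance walked by the robot)."""
    return p["step_length"] / p["step_period"] + 0.1 * p["knee_lift"]

def local_search(start, iterations=200, step=0.02, seed=0):
    """Local exploration of the parameter space inside the user bounds:
    perturb the best-so-far parameters, clip to the allowed ranges,
    and keep candidates that respect the constraints and improve fitness."""
    rng = random.Random(seed)
    best, best_f = dict(start), fitness(start)
    for _ in range(iterations):
        cand = {
            k: min(max(best[k] + rng.uniform(-step, step), lo), hi)
            for k, (lo, hi) in BOUNDS.items()
        }
        f = fitness(cand)
        if satisfies_constraints(cand) and f > best_f:
            best, best_f = cand, f
    return best, best_f

start = {"knee_lift": 0.4, "step_length": 0.1, "step_period": 0.6}
best, best_f = local_search(start)
```

Because the parameters keep their intuitive meaning, the operator can inspect `best`, judge the resulting gait qualitatively, and simply tighten a range in `BOUNDS` (e.g. raise the lower bound on `knee_lift`) before re-running the search.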