Successful planning and control of robots strongly depend on the quality of kinematic models, which define mappings between configuration space (e.g., joint angles) and task space (e.g., Cartesian coordinates of the end effector). Often these models are predefined, in which case unforeseen bodily changes may result in unpredictable behavior. We are interested in a learning approach that can adapt to such changes, be they due to motor or sensory failures or due to flexible extensions of the robot body, for example, by the use of tools. We focus on learning locally linear forward velocity kinematics models by means of the neuro-evolutionary approach XCSF. The algorithm learns in a self-supervised manner, executing movements autonomously by means of goal babbling. It preserves actuator redundancies, which can be exploited during movement execution to fulfill current task constraints. For detailed evaluation purposes, we study the performance of XCSF when learning to control an anthropomorphic arm with seven degrees of freedom in simulation. We show that XCSF can learn large forward velocity kinematic mappings autonomously and rather independently of the provided task space representation. The resulting mapping is highly suitable for resolving redundancies on the fly during inverse, goal-directed control.
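As a brief illustrative sketch (the symbols $q$, $x$, and $J$ are our notation, not drawn from the abstract itself): the forward velocity kinematics that XCSF approximates in a locally linear fashion relate joint velocities $\dot{q} \in \mathbb{R}^{n}$ to task-space velocities $\dot{x} \in \mathbb{R}^{m}$ via the configuration-dependent Jacobian,
\[
\dot{x} = J(q)\,\dot{q}, \qquad J(q) = \frac{\partial x}{\partial q} \in \mathbb{R}^{m \times n},
\]
and, for a redundant arm ($n > m$; here $n = 7$), a standard redundancy-resolving inverse control law combines the Moore-Penrose pseudoinverse $J^{+}$ with a nullspace term,
\[
\dot{q} = J(q)^{+}\,\dot{x}^{*} + \bigl(I - J(q)^{+}J(q)\bigr)\,\dot{q}_{0},
\]
where $\dot{x}^{*}$ is the desired task-space velocity and $\dot{q}_{0}$ encodes secondary constraints (e.g., a preferred posture) that the projector $I - J^{+}J$ confines to the Jacobian's nullspace, leaving the task motion undisturbed.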