Dingwell JB, Smallwood RF, Cusumano JP. Trial-to-trial dynamics and learning in a generalized, redundant reaching task. J Neurophysiol 109: 225–237, 2013. First published October 10, 2012; doi:10.1152/jn.00951.2011.

If humans exploit task redundancies as a general strategy, they should do so even when the redundancy is decoupled from the physical implementation of the task itself. Here, we derived a family of goal functions that explicitly defined infinitely many possible redundancies between distance (D) and time (T) for unidirectional reaching. All [T, D] combinations satisfying any specific goal function defined a goal-equivalent manifold (GEM). We tested how humans learned two such functions, D/T = c (constant speed) and D·T = c, that were very different but could both be achieved by neurophysiologically and biomechanically similar reaching movements. Subjects were never explicitly shown either relationship; they were instructed only to minimize their errors. Subjects exhibited significant learning, and consolidation of that learning, for both tasks. Initial error magnitudes were higher, but learning rates were faster, for the D·T task than for the D/T task. Learning the D/T task first facilitated subsequent learning of the D·T task; conversely, learning the D·T task first interfered with subsequent learning of the D/T task. Analyses of trial-to-trial dynamics demonstrated that subjects actively corrected deviations perpendicular to each GEM faster than deviations along it, and did so to the same degree for both tasks, despite exhibiting significantly greater variance ratios for the D/T task. Variance measures alone therefore failed to capture critical features of trial-to-trial control. Humans actively exploited these abstract task redundancies even though they did not have to: they did not adopt readily available alternative strategies that could have achieved the same performance.
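
For concreteness, a minimal formalization of the two goal functions and their GEMs, consistent with the abstract (the symbols f_1 and f_2 and the set notation are ours; the abstract does not fix a particular parameterization):

\[
  f_{1}(T, D) \;=\; \frac{D}{T} - c \;=\; 0
  \qquad \text{(constant-speed task)},
\]
\[
  f_{2}(T, D) \;=\; D \, T - c \;=\; 0
  \qquad \text{(constant } D \cdot T \text{ task)},
\]
\[
  \mathrm{GEM}_{i} \;=\; \bigl\{\, (T, D) \in \mathbb{R}_{>0}^{2} \;:\; f_{i}(T, D) = 0 \,\bigr\}, \quad i \in \{1, 2\}.
\]

On this reading, any [T, D] pair on a given GEM achieves the task exactly, so trial-to-trial deviations can be decomposed into components tangent to the GEM (goal-equivalent, requiring no correction) and perpendicular to it (goal-relevant errors), which is the decomposition implied by the perpendicular-versus-along comparison above.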