Visual perception often fails to recover the veridical 3D shape of objects in the environment because the available depth cues are ambiguous and variable. Yet we rely heavily on 3D shape estimates when planning movements, for example when reaching to pick up an object from a slanted surface. Given the wide variety of distortions that can affect 3D perception, how do our actions remain accurate across different environments? One hypothesis is that the visuomotor system selectively filters 3D information to minimize distortions. Indeed, some studies have found that actions appear to preferentially rely on stereo information when it is placed in conflict with texture information. However, because these studies analyze averages over many trials, this apparent preference could instead be produced by sensorimotor adaptation. In Experiment 1, we create a set of cue-conflict stimuli in which one available depth cue is affected by a constant bias. Sensory feedback rapidly aligns the motor output with physical reality within a few trials, which can make it seem as if action planning selectively relies on the reinforced cue, even though no change in the relative influences of the cues is needed to eliminate constant errors. In contrast, when one depth cue becomes less correlated with physical reality, movement errors are variable, and canonical adaptation fails because opposing error corrections cancel out. As a result, canonical adaptation cannot explain the preference for stereo reported in studies with variable errors. However, Experiment 2 shows that these persistent errors can produce a novel form of adaptation that gradually reduces the relative influence of an unreliable depth cue. These findings show that grasp control processes are continuously modified by sensory feedback to compensate for both biases and noise in 3D visual processing, rather than relying on a hardwired preference for one type of depth information.