Perceptual learning, a form of adult brain plasticity that produces improved discrimination, has been studied in various tasks and senses. However, it is unknown whether and how this improved discrimination alters stimulus appearance. Here, in addition to a discrimination task, we used an estimation task to investigate how training affects stimulus appearance in human adults. Before and after training, observers were shown stimuli composed of dots moving slightly clockwise or counter-clockwise of horizontal, whose apparent direction has been shown to be biased away from horizontal. Observers were divided into three groups: those who (1) trained in a discrimination task, (2) trained in an estimation task, or (3) did not train. Training improved discrimination accuracy and decreased coherence thresholds. Counterintuitively, training also distorted appearance, substantially exacerbating estimation biases. These changes occurred in both training groups (but not in the no-training control group), suggesting a common learning mechanism. We developed a computational observer model that simulates performance on both the discrimination and estimation tasks. The model incorporates three components: (1) the internal representation favors cardinal motion directions, which are most common in the natural environment; (2) in the estimation task, observers implicitly categorize the motion as clockwise or counter-clockwise of horizontal and condition their estimates on that categorization; and (3) both types of training increase the precision with which the trained motion directions are represented. Simulations of the model, fit to individual observers' data, account for both their improved discrimination and their increased estimation biases. We conclude that perceptual learning improves discrimination while simultaneously distorting appearance.
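The three model components can be made concrete with a small simulation. The sketch below is illustrative only, not the authors' fitted model: it assumes Gaussian encoding noise whose standard deviation is lowest at horizontal (component 1), an estimate computed from the posterior restricted to the implicitly chosen category (component 2), and a local, training-induced dip in encoding noise at hypothetical trained directions of ±4° (component 3). The grid resolution, noise profile, and all numeric parameters (`encoding_sigma`, `trained_dirs`, and so on) are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
GRID = np.linspace(-30.0, 30.0, 1201)  # candidate directions, deg from horizontal

def encoding_sigma(theta, trained=False, trained_dirs=(-4.0, 4.0)):
    """Encoding noise (deg) at direction theta (deg from horizontal).
    Component 1: noise is lowest at the cardinal (horizontal) axis.
    Component 3: if trained, a local dip sharpens the representation of
    the trained directions. Functional forms and numbers are illustrative
    assumptions, not fitted values."""
    sigma = 5.0 + 3.0 * np.abs(np.sin(np.deg2rad(2.0 * theta)))
    if trained:
        for d in trained_dirs:
            sigma = sigma - 2.0 * np.exp(-0.5 * ((theta - d) / 3.0) ** 2)
    return np.clip(sigma, 0.5, None)

def simulate_trial(theta_true, trained=False):
    """One estimation trial. Component 2: the observer implicitly
    categorizes the motion as CW or CCW of horizontal from the posterior
    mass, then conditions the estimate on that category by zeroing the
    posterior on the rejected side before taking its mean."""
    m = rng.normal(theta_true, encoding_sigma(theta_true, trained))
    s = encoding_sigma(GRID, trained)
    # Heterogeneous-noise likelihood: more precise directions get taller peaks.
    post = np.exp(-0.5 * ((m - GRID) / s) ** 2) / s
    post /= post.sum()
    ccw = post[GRID > 0].sum() > 0.5              # implicit category decision
    cond = np.where(GRID > 0 if ccw else GRID < 0, post, 0.0)
    estimate = (GRID * cond).sum() / cond.sum()   # conditioned posterior mean
    return ccw, estimate

# Compare pre- vs post-training on a stimulus 4 deg CCW of horizontal:
# discrimination accuracy (fraction of correct CCW categorizations) and
# mean signed estimation bias relative to the true direction.
for trained in (False, True):
    trials = [simulate_trial(4.0, trained) for _ in range(5000)]
    acc = np.mean([c for c, _ in trials])
    bias = np.mean([e for _, e in trials]) - 4.0
    print(f"trained={trained}: accuracy={acc:.2f}, bias={bias:+.2f} deg")
```

Under these assumptions, sharper encoding at the trained directions pushes more measurements onto the correct side of horizontal (better discrimination), while the conditioned, category-contingent estimate remains repelled from the horizontal boundary; whether the bias grows quantitatively depends on the parameter regime, which in the paper is determined by fits to individual observers.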