A key question in perception research is how stimulus variations translate into perceptual magnitudes, that is, the perceptual
encoding
process. As experimenters, we cannot probe perceptual magnitudes directly, but must infer the encoding process from responses obtained in a psychophysical experiment. The most prominent experimental technique to measure perceptual appearance is matching, where observers adjust a probe stimulus to match a target in its appearance along the dimension of interest. The resulting data quantify the perceived magnitude of the target in physical units of the probe, and are thus an indirect expression of the underlying encoding process. In this paper, we show analytically and in simulation that data from matching tasks do not sufficiently constrain perceptual encoding functions, because infinitely many pairs of encoding functions generate the same matching data. We use simulation to demonstrate that maximum likelihood conjoint measurement (Ho, Landy, & Maloney, 2008; Knoblauch & Maloney, 2012) accurately recovers the shape of ground-truth encoding functions from data that were generated with these very functions. Finally, we measure perceptual scales and matching data for White's effect (White, 1979) and show that the matching data can be predicted from the estimated encoding functions, down to individual differences.
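The non-identifiability claim can be sketched as a short derivation. The notation below (\(\psi_T\) and \(\psi_P\) for the target and probe encoding functions, \(m\) for the matching function, \(g\) for an arbitrary monotonic transformation) is introduced here for illustration and is one plausible formalization of the argument:

```latex
% A match m(t) is set so that the probe's perceived magnitude
% equals the target's perceived magnitude:
\psi_T(t) = \psi_P\bigl(m(t)\bigr)
\quad\Longrightarrow\quad
m(t) = \psi_P^{-1}\bigl(\psi_T(t)\bigr).

% For any strictly increasing transformation g, the transformed pair
% (g \circ \psi_T,\; g \circ \psi_P) predicts the same matches:
\bigl(g \circ \psi_P\bigr)^{-1}\bigl(g\bigl(\psi_T(t)\bigr)\bigr)
= \psi_P^{-1}\Bigl(g^{-1}\bigl(g\bigl(\psi_T(t)\bigr)\bigr)\Bigr)
= \psi_P^{-1}\bigl(\psi_T(t)\bigr)
= m(t).
```

Since \(g\) can be any strictly increasing function, matching data alone cannot distinguish among the infinitely many pairs \((g \circ \psi_T,\; g \circ \psi_P)\).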