Visual motion analysis is fundamental to survival across the animal kingdom. In insects, our understanding of the underlying computations has centered on the Hassenstein-Reichardt motion detector, which computes a two-point cross-correlation via multiplication; in mammalian cortex, a similar signal is thought to be computed by comparing the outputs of matched squaring operations. Both of these operations are difficult to implement precisely with biophysical mechanisms; moreover, they fail to detect the more complex multipoint local motion cues present in the visual environment. Here, via single-unit recordings in two visual specialists, the dragonfly (Odonata) and the macaque, and via model simulations, we show that neuronal computations are not simply approximations to idealized behaviors forced by biological constraints, but rather signatures of a common computational strategy for capturing multiple local motion cues. The similarity of motion computations at the neuronal level in the brains of two extremely dissimilar animals, separated by over 700 Myr of evolutionary divergence (ref. 1), suggests convergence on a common computational scheme for detecting visual motion.
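As an illustrative aside, the following minimal sketch shows the two textbook models contrasted above: the Hassenstein-Reichardt correlator, which signals direction by multiplying each input with a delayed copy of its neighbour, and an algebraically equivalent form built only from matched squaring operations, via the identity 4xy = (x + y)^2 - (x - y)^2. This is not the study's model; the stimulus, drift rate, delay, and all names are arbitrary demonstration choices.

```python
import numpy as np

def hassenstein_reichardt(a, b, delay):
    """Two-point Hassenstein-Reichardt correlator.

    a, b  : luminance signals from two neighbouring receptors (1-D arrays)
    delay : temporal lag of the filtered arm, in samples

    Each input is multiplied by a delayed copy of its neighbour, and the
    mirror-symmetric arm is subtracted; the sign of the mean output then
    reports the direction of motion.
    """
    a_d = np.roll(a, delay)   # delayed copy of the left input
    b_d = np.roll(b, delay)   # delayed copy of the right input
    return a_d * b - a * b_d  # opponent two-point correlation

def matched_squaring_form(a, b, delay):
    """The same opponent signal built from squaring operations only,
    using 4xy = (x + y)**2 - (x - y)**2 -- the algebraic link between
    multiplicative and squaring-based motion detectors."""
    a_d, b_d = np.roll(a, delay), np.roll(b, delay)
    return ((a_d + b) ** 2 - (a_d - b) ** 2
            - (a + b_d) ** 2 + (a - b_d) ** 2) / 4.0

# A drifting sinusoid sampled at two points; the right receptor sees the
# pattern one sample later (rightward motion). Drift rate (0.05 cycles per
# sample) and delay (5 samples) are arbitrary demonstration values.
t = np.arange(200)
a = np.sin(2 * np.pi * 0.05 * t)        # left receptor
b = np.sin(2 * np.pi * 0.05 * (t - 1))  # right receptor, delayed pattern

print(hassenstein_reichardt(a, b, 5).mean())  # > 0 : rightward motion
print(hassenstein_reichardt(b, a, 5).mean())  # < 0 : leftward motion
print(np.allclose(hassenstein_reichardt(a, b, 5),
                  matched_squaring_form(a, b, 5)))  # True: forms agree
```

Note that both forms compute only a two-point correlation; as the abstract argues, such detectors are blind to multipoint motion cues, which is the gap the recorded neurons appear to address.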