During natural visual foraging, primates move their eyes 2-3 times per second to bring objects of interest to central, high-resolution vision at the fovea. For moving objects, they combine rapid saccadic eye movements with smooth following movements to track targets continuously. Saccadic eye movements are also known to produce perceptual enhancements at the saccade target before the eyes move, an effect called pre-saccadic attention. Recently, in human participants, we found that saccades made to peripheral motion apertures were followed by smooth post-saccadic eye movements that tracked stimulus motion at low gain (Kwon, Rolfs, & Mitchell, 2019). Because this effect persisted even when the stimulus disappeared during saccade flight, we infer that the post-saccadic following was predictive, reflecting the integration of peripheral motion information from the target before the saccade, and that it provides an automatic perceptual read-out of stimulus motion. Here we examined post-saccadic following in marmoset monkeys to determine whether they automatically track stimulus motion as humans do and, if so, whether that following response can serve as a reliable behavioral read-out of motion. Marmosets performed a saccade foraging task in which they first acquired central fixation and then made a saccade to one of three motion apertures. On each trial, the direction of motion of each aperture was sampled independently from 16 directions. We found that, immediately upon saccade offset, the marmosets' eye traces followed the pre-saccadic target motion at a low gain (10-20%), consistent with humans. Motion from the other, non-target apertures also influenced following responses, though with much weaker gain.
Before the saccade, gain was distributed equally across apertures, but immediately after the saccade it was enhanced for the saccade target relative to the other apertures, consistent with the post-saccadic target enhancement found in smooth pursuit (Gardner & Lisberger, 2001). The following response provided an estimate of target motion with median absolute angular errors ranging from 25 to 50 degrees across sessions, roughly half as accurate as an explicit trained perceptual report (Cloherty et al., 2020). From session to session, the relative gain for the target compared to the other apertures also varied, providing an index of attentional selection. These findings indicate that natural visual foraging with moving targets can provide an automatic behavioral read-out of peripheral motion integration and pre-saccadic attention.
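As a rough illustration of the two measures reported above, the sketch below estimates following gain as the no-intercept least-squares slope of post-saccadic eye velocity on stimulus velocity, and scores direction read-outs by median absolute angular error with circular wrapping. This is a minimal sketch on synthetic data; the function names, the regression-based gain estimate, and the simulated velocities are our assumptions for illustration, not the analysis pipeline actually used in the study.

```python
import numpy as np

def following_gain(eye_vel, stim_vel):
    # Gain as the least-squares slope of eye velocity on stimulus velocity
    # (pooled over x and y components), with no intercept term.
    e = np.asarray(eye_vel).ravel()
    s = np.asarray(stim_vel).ravel()
    return float(np.dot(e, s) / np.dot(s, s))

def median_abs_angular_error(readout_deg, true_deg):
    # Wrap direction differences into [-180, 180) before taking magnitudes,
    # so that e.g. 350 deg vs 10 deg counts as a 20 deg error, not 340 deg.
    d = (np.asarray(readout_deg) - np.asarray(true_deg) + 180.0) % 360.0 - 180.0
    return float(np.median(np.abs(d)))

# Hypothetical example: eye traces follow the stimulus at ~15% gain plus noise.
rng = np.random.default_rng(0)
stim = rng.normal(0.0, 10.0, size=(200, 2))                # stimulus velocity (deg/s)
eye = 0.15 * stim + rng.normal(0.0, 0.5, size=(200, 2))    # noisy low-gain following

print(round(following_gain(eye, stim), 2))                 # close to 0.15
print(median_abs_angular_error([350, 10, 90], [10, 350, 45]))  # 20.0
```

The circular wrapping step matters because raw angular differences near the 0/360 boundary would otherwise inflate the error estimate.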