Many events that humans and other species experience contain regularities in which certain elements within an event predict certain others. While some of these regularities involve tracking the co-occurrences between temporally adjacent stimuli, others involve tracking the co-occurrences between temporally distant stimuli (i.e., nonadjacent dependencies, NADs). Prior research shows robust learning of adjacent dependencies in humans and other species, whereas learning NADs is more difficult and often requires support from properties of the stimulus to help learners notice the NADs. Here, we report seven experiments that examined the effects of dynamic motion on adults' NAD learning from visual sequences involving human and nonhuman agents. We tested adults' NAD learning from visual sequences of human actions, object transformations, static images of human postures, and static images of an object in different postures. We found that dynamic motion aids the acquisition of NADs. We also found that learning NADs from sequences involving human agents is more robust than learning from sequences involving nonhuman objects. We propose that dynamic motion and human agents each independently yield richer representations that provide a stronger signal for NAD learning.