If biological-motion point-light displays are presented upside down, adequate perception is strongly impaired. Reminiscent of the inversion effect in face recognition, it has been suggested that the inversion effect in biological motion is due to impaired configural processing in a highly trained expert system. Here, we present data that are incompatible with this view. We show that observers can readily retrieve information about direction from scrambled point-light displays of humans and animals. Even though all configural information is entirely disrupted, perception of these displays is still subject to a significant inversion effect. Inverting only parts of the display reveals that the information about direction, as well as the associated inversion effect, is entirely carried by the local motion of the feet. We interpret our findings in terms of a visual filter that is tuned to the characteristic motion of the limbs of an animal in locomotion and hypothesize that this mechanism serves as a general detection system for the presence of articulated terrestrial animals.
In the present study, we examined whether young infants can extract information regarding the directionality of biological motion. We report that 6-month-old infants can differentiate leftward and rightward motions from a movie depicting the sagittal view of an upright human point-light walker, walking as if on a treadmill. Inversion of the stimuli resulted in no detection of directionality. These findings suggest that biological motion displays convey information for young infants beyond that which distinguishes them from nonbiological motion; aspects of the action itself are also detected. The potential visual mechanisms underlying biological motion detection, as well as the behavioral interpretations of point-light figures, are discussed.

The movements of animate agents offer an abundance of information to the adult human observer; we can extract information regarding species classification, gender, attractiveness, and emotion from animate motion. Intriguingly, we do so even when the motion is depicted in simple point-light displays conveying the movement of the major joints of the body in the absence of key morphological features (e.g., faces, skin, and hair).
We present ZeroEGGS, a neural network framework for speech-driven gesture generation with zero-shot style control by example. This means style can be controlled via only a short example motion clip, even for motion styles unseen during training. Our model uses a Variational framework to learn a style embedding, making it easy to modify style through latent space manipulation or blending and scaling of style embeddings. The probabilistic nature of our framework further enables the generation of a variety of outputs given the input, addressing the stochastic nature of gesture motion. In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles. In a user study, we then show that our model outperforms previous state-of-the-art techniques in naturalness of motion, appropriateness for speech, and style portrayal. Finally, we release a high-quality dataset of full-body gesture motion including fingers, with speech, spanning across 19 different styles. Our code and data are publicly available at https://github.com/ubisoft/ubisoft-laforge-ZeroEGGS.
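To make the idea of latent-space style control concrete, the following is a minimal, hypothetical sketch of a variational style encoder and of blending or scaling style embeddings, in the spirit of the ZeroEGGS abstract above. The class names, tensor shapes, and GRU-based encoder are illustrative assumptions for this sketch, not the released ZeroEGGS implementation.

```python
# Hypothetical sketch: a variational style encoder plus embedding blending/scaling.
# Shapes, layer choices, and names are assumptions, not the ZeroEGGS codebase.
import torch
import torch.nn as nn


class StyleEncoder(nn.Module):
    """Encodes an example motion clip into a Gaussian style embedding."""

    def __init__(self, pose_dim: int = 75, embed_dim: int = 64):
        super().__init__()
        # Summarize the clip over time with a simple GRU (an assumption for this sketch).
        self.rnn = nn.GRU(pose_dim, 128, batch_first=True)
        self.to_mu = nn.Linear(128, embed_dim)
        self.to_logvar = nn.Linear(128, embed_dim)

    def forward(self, motion_clip: torch.Tensor):
        # motion_clip: (batch, frames, pose_dim)
        _, h = self.rnn(motion_clip)
        h = h[-1]                                  # final hidden state, (batch, 128)
        return self.to_mu(h), self.to_logvar(h)


def sample_style(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Reparameterization trick: draw one stochastic sample of the style embedding."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)


def blend_styles(z_a: torch.Tensor, z_b: torch.Tensor, alpha: float) -> torch.Tensor:
    """Linear blend of two style embeddings; alpha=0 gives style A, alpha=1 gives style B."""
    return (1.0 - alpha) * z_a + alpha * z_b


if __name__ == "__main__":
    encoder = StyleEncoder()
    clip_a = torch.randn(1, 120, 75)   # e.g. a short example clip in one style
    clip_b = torch.randn(1, 120, 75)   # e.g. a short example clip in another style
    z_a = sample_style(*encoder(clip_a))
    z_b = sample_style(*encoder(clip_b))
    z_mix = blend_styles(z_a, z_b, alpha=0.5)   # mix the two styles
    z_strong = 1.5 * z_a                        # scale to exaggerate one style
    print(z_mix.shape, z_strong.shape)          # torch.Size([1, 64]) twice
```

In such a setup, the blended or scaled embedding would then condition the speech-driven gesture generator, which is how a style unseen during training can still be imposed from a single example clip.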