Vection refers to the illusion of self-motion that arises when a significant portion of the visual field is stimulated by visual flow while the body remains still. Vection is known to be strong for peripheral vision stimulation and relatively weak for central vision. In this paper, the results of an experimental study of central linear vection with and without vibrotactile stimulation of the feet are presented. Three types of vibratory stimuli were used: a sinusoidal signal, pink noise, and a chirp signal. Six subjects faced a screen showing a looming visual flow that suggested virtual forward motion. The results showed that the sensation of self-motion arose sooner, and was strongest, for sinusoidal vibrations at a constant frequency. For some subjects, a vibrotactile stimulus with increasing frequency (a chirp) also elicited stronger vection. The sensation of self-motion was weakest when the visual flow was accompanied by pink-noise vibrations or by no vibrotactile stimulation at all. Possible application areas are discussed.
Understanding how we spontaneously scan the visual world through eye movements is crucial for characterizing both the strategies and inputs of vision. Despite the importance of the third or depth dimension for perception and action, little is known about how the specifically three-dimensional aspects of scenes affect looking behavior. Here we show that three-dimensional surface orientation has a surprisingly large effect on spontaneous exploration, and we demonstrate that a simple rule predicts eye movements given surface orientation in three dimensions: saccades tend to follow surface depth gradients. The rule proves to be quite robust: it generalizes across depth cues, holds in the presence or absence of a task, and applies to more complex three-dimensional objects. These results not only lead to a more accurate understanding of visuo-motor strategies, but also suggest a possible new oculomotor technique for studying three-dimensional vision from a variety of depth cues in subjects--such as animals or human infants--that cannot explicitly report their perceptions.
Abstract. The smile is one of the key elements in identifying the emotions and present state of mind of an individual. In this work, we propose a set of approaches to classifying posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow, and histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained with a large number of face images, HOG features outperform this model on the overall face smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features improved it. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles as either 'spontaneous' or 'posed' using support vector machines (SVM). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods.
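To make the feature-extraction step concrete, the sketch below is a toy, pure-Python re-implementation of the HOG descriptor mentioned in the abstract: per-cell histograms of gradient orientations, L2-normalized, concatenated into a feature vector that could then be fed to an SVM. The function name, cell size, and bin count are illustrative assumptions, not the paper's actual configuration (which uses an established HOG implementation over face images).

```python
import math

def hog_features(image, cell=4, bins=9):
    """Simplified histogram-of-oriented-gradients descriptor.

    `image` is a 2-D list of grayscale intensities. Toy illustration only;
    cell size and bin count are assumed, not taken from the paper.
    """
    h, w = len(image), len(image[0])
    features = []
    # Slide a non-overlapping cell window over the image
    for cy in range(0, h - cell + 1, cell):
        for cx in range(0, w - cell + 1, cell):
            hist = [0.0] * bins
            for y in range(cy, cy + cell):
                for x in range(cx, cx + cell):
                    # Central-difference gradients, clamped at the borders
                    gx = image[y][min(x + 1, w - 1)] - image[y][max(x - 1, 0)]
                    gy = image[min(y + 1, h - 1)][x] - image[max(y - 1, 0)][x]
                    mag = math.hypot(gx, gy)
                    # Unsigned orientation in [0, 180), binned
                    ang = math.degrees(math.atan2(gy, gx)) % 180.0
                    hist[int(ang / (180.0 / bins)) % bins] += mag
            # L2-normalize each cell histogram (guard against all-zero cells)
            norm = math.sqrt(sum(v * v for v in hist)) or 1.0
            features.extend(v / norm for v in hist)
    return features

# Toy usage: an 8x8 image with a vertical edge produces purely horizontal
# gradients, so all mass lands in the 0-degree orientation bin of each cell.
img = [[0] * 4 + [255] * 4 for _ in range(8)]
desc = hog_features(img)
print(len(desc))  # 4 cells x 9 bins = 36
```

In the pipeline the abstract describes, descriptors like this (computed with a standard HOG implementation) would be stacked per frame and passed to an SVM for the posed-vs-spontaneous decision.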