If face images are degraded by block averaging, there is a nonlinear decline in recognition accuracy as block size increases, suggesting that identification requires a critical minimum range of object spatial frequencies. The identification of faces was measured with equivalent Fourier low-pass filtering and block averaging preserving the same information and with high-pass transformations. In Experiment 1, accuracy declined and response time increased in a significant nonlinear manner in all cases as the spatial-frequency range was reduced. However, it did so at a faster rate for the quantized and high-passed images. A second experiment controlled for the differences in the contrast of the high-pass faces and found a reduced but significant and nonlinear decline in performance as the spatial-frequency range was reduced. These data suggest that face identification is preferentially supported by a band of spatial frequencies of approximately 8-16 cycles per face; contrast- or line-based explanations were found to be inadequate. The data are discussed in terms of current models of face identification.

The questions of whether the information concerning the identity of faces is carried by a limited range of spatial scales, and whether the potential information from different regions of the spatial spectrum is given equal weight in the determination of identity, have been approached in a number of different ways. One method of considering these issues has been to make use of spatial-frequency filtering techniques (Harmon, 1973). However, variations in this method have produced contradictory results, with notably different conclusions about the relative importance of different spatial-frequency bands specified in terms of cycles per face. The term cycles per face is defined as the number of sinusoidal repetitions of a given width that can be placed within the eye-level width of the face.
The use of this metric to describe the information present in stimuli allows discussion of the degree of detail necessary for recognition, perhaps by defining the scale of facial configuration. A class of objects has a configuration if there is a consistent set of features all arranged in the same order. Thus, if a set of examples is superimposed, normalizing for scale and viewpoint, another example of the class is produced that is closer to the prototype. Clearly, faces have this property, since all have two eyes, a nose, and a mouth, and these are consistently arranged.

Examples of Harmon's (1973) block-averaged (quantized) images can be seen in Figure 2. The images are formed by placing a regular square grid across the image and setting the pixel value at each grid square to the average gray level within it. This work suggested that the minimum image quality that allows effective identification corresponds to a 16 × 16 pixel image; however, since the images did not take up the whole of the screen, the number of pixels per face was slightly lower. Harmon also used a smooth low-pass filtering technique. This type of filtering operation does not introduce additional spatial frequencies (noise), as the pix...
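The block-averaging (quantization) operation just described is simple to state concretely. The sketch below is a minimal NumPy illustration of the grid-averaging step; the image contents and block size are illustrative, not the stimuli used in the experiments:

```python
import numpy as np

def block_average(img, block):
    """Quantize an image by replacing each grid square with its mean gray level."""
    h, w = img.shape
    # Trim so both dimensions divide evenly by the block size.
    h, w = h - h % block, w - w % block
    img = img[:h, :w]
    # Average within each block, then tile the means back to full resolution.
    means = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.repeat(np.repeat(means, block, axis=0), block, axis=1)

# Illustrative 64 x 64 "image"; a block size of 4 yields a 16 x 16 grid,
# the resolution Harmon identified as the minimum for effective identification.
face = np.arange(64 * 64, dtype=float).reshape(64, 64)
quantized = block_average(face, 4)
```

Because each block is replaced by its own mean, the overall mean luminance of the image is preserved while all spatial frequencies above the grid scale are removed or aliased.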
Micro-facial expressions are spontaneous, involuntary movements of the face when a person experiences an emotion but attempts to hide their facial expression, most likely in a high-stakes environment. Recently, research in this field has grown in popularity; however, publicly available datasets of micro-expressions have limitations due to the difficulty of naturally inducing spontaneous micro-expressions. Other issues include lighting, low resolution, and low participant diversity. We present a newly developed spontaneous micro-facial movement dataset with diverse participants, coded using the Facial Action Coding System. The experimental protocol addresses the limitations of previous datasets, including eliciting emotional responses from stimuli tailored to each participant. Dataset evaluation was completed by running preliminary experiments to classify micro-movements from non-movements. Results were obtained using a selection of spatio-temporal descriptors and machine learning. We further evaluate the dataset on emerging methods of feature difference analysis and propose an Adaptive Baseline Threshold that uses individualised neutral expression to improve the performance of micro-movement detection. In contrast to machine learning approaches, we outperform the state of the art with a recall of 0.91. The outcomes show the dataset can become a new standard for micro-movement data, with future work expanding on data representation and analysis.
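The Adaptive Baseline Threshold is described only at a high level in the abstract. One plausible reading, sketched below in NumPy, sets a per-person detection threshold from the variability of that person's own neutral-expression frames; the feature representation and the multiplier `k` are assumptions for illustration, not details from the paper:

```python
import numpy as np

def adaptive_baseline_threshold(features, neutral, k=3.0):
    """Flag frames whose feature distance from an individual's neutral
    expression exceeds a threshold adapted to that individual.

    features: (n_frames, n_dims) per-frame descriptors (hypothetical)
    neutral:  (n_neutral, n_dims) descriptors from the same person's
              neutral-expression frames
    """
    baseline = neutral.mean(axis=0)
    # The spread of the person's own neutral frames around their baseline
    # gives an individualised estimate of resting variability.
    neutral_dist = np.linalg.norm(neutral - baseline, axis=1)
    thresh = neutral_dist.mean() + k * neutral_dist.std()
    dist = np.linalg.norm(features - baseline, axis=1)
    return dist > thresh  # True where a micro-movement is detected

# Synthetic example: quiet frames plus one injected micro-movement.
rng = np.random.default_rng(0)
neutral = rng.normal(0.0, 0.01, (60, 8))
clip = rng.normal(0.0, 0.01, (20, 8))
clip[10] += 0.5  # one frame deviates clearly from the neutral baseline
flags = adaptive_baseline_threshold(clip, neutral)
```

The design point is that the threshold is individualised: a naturally expressive neutral face raises the bar, so the same absolute feature change is not judged identically across participants.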
To understand the functional significance of skeletal muscle anatomy, a method of quantifying local shape changes in different tissue structures during dynamic tasks is required. Taking advantage of the good spatial and temporal resolution of B-mode ultrasound imaging, we describe a method of automatically segmenting images into fascicle and aponeurosis regions and tracking movement of features, independently, in localized portions of each tissue. Ultrasound images (25 Hz) of the medial gastrocnemius muscle were collected from eight participants during ankle joint rotation (2° and 20°), isometric contractions (1, 5, and 50 Nm), and deep knee bends. A Kanade-Lucas-Tomasi feature tracker was used to identify and track any distinctive and persistent features within the image sequences. A velocity field representation of local movement was then found and subdivided between fascicle and aponeurosis regions using segmentations from a multiresolution active shape model (ASM). Movement in each region was quantified by interpolating the effect of the fields on a set of probes. ASM segmentation results were compared with hand-labeled data, while aponeurosis and fascicle movement were compared with results from a previously documented cross-correlation approach. ASM provided good image segmentations (<1 mm average error), with fully automatic initialization possible in sequences from seven participants. Feature tracking provided similar length change results to the cross-correlation approach for small movements, while outperforming it in larger movements. The proposed method provides the potential to distinguish between active and passive changes in muscle shape and model strain distributions during different movements/conditions and quantify nonhomogeneous strain along aponeuroses.
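The Kanade-Lucas-Tomasi tracker at the heart of the feature-tracking step estimates small translations by solving a 2 × 2 linear system built from image gradients. Below is a single-window, single-step Lucas-Kanade sketch in NumPy; the full KLT pipeline adds corner selection, image pyramids, and iterative refinement, and the window size and test pattern here are illustrative:

```python
import numpy as np

def lucas_kanade_step(img0, img1, pt, win=9):
    """Estimate the (dx, dy) translation of a square window around pt
    between two frames, via the classic Lucas-Kanade normal equations."""
    y, x = pt
    r = win // 2
    p0 = img0[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    p1 = img1[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    gy, gx = np.gradient(p0)        # spatial gradients of the template
    gt = p1 - p0                    # temporal difference
    # Normal equations: A [dx, dy]^T = b, accumulated over the window.
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy

# Textured synthetic frame shifted one pixel to the right.
Y, X = np.mgrid[0:32, 0:32]
img0 = np.sin(0.5 * X) * np.cos(0.3 * Y)
img1 = np.roll(img0, 1, axis=1)
dx, dy = lucas_kanade_step(img0, img1, (16, 16))
```

Features where the gradient matrix `A` is well-conditioned (i.e., corners) are exactly the "distinctive and persistent features" worth tracking, which is the selection criterion KLT is known for.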
We address the problem of tracking in vivo muscle fascicle shape and length changes using ultrasound video sequences. Quantifying fascicle behaviour is required to improve understanding of the functional significance of a muscle's geometric properties. Ultrasound imaging provides a non-invasive means of capturing information on fascicle behaviour during dynamic movements; to date, however, computational approaches for assessing such images have been limited. Our approach to the problem is novel because we permit fascicles to take up non-linear shape configurations. We achieve this using a Bayesian tracking framework that is: i) robust, conditioning shape estimates on the entire history of image observations; and ii) flexible, enforcing only a very weak Gaussian Process shape prior that requires fascicles to be locally smooth. The method allows us to track and quantify fascicle behaviour in vivo during a range of movements, providing insight into dynamic changes in muscle geometric properties which may be linked to patterns of activation and intramuscular forces and pressures.
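A "weak Gaussian Process shape prior that requires fascicles to be locally smooth" can be illustrated with plain GP regression under a squared-exponential kernel, which penalises only local roughness rather than imposing a fixed (e.g. straight-line) shape. The sketch below is a minimal version of that idea; the 1-D profile parameterisation, length scale, and noise level are illustrative assumptions, not the paper's full Bayesian tracking framework:

```python
import numpy as np

def gp_smooth(x_obs, y_obs, x_query, length_scale=8.0, noise=0.2):
    """GP regression posterior mean with a squared-exponential kernel,
    acting as a weak 'locally smooth' prior on a fascicle profile."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)
    K = k(x_obs, x_obs) + noise ** 2 * np.eye(len(x_obs))
    return k(x_query, x_obs) @ np.linalg.solve(K, y_obs)

# Noisy observations of a gently curved (non-linear) fascicle profile.
rng = np.random.default_rng(1)
x = np.arange(0.0, 50.0, 2.0)          # positions along the fascicle (a.u.)
truth = np.sin(x / 15.0)               # hypothetical smooth curved profile
noisy = truth + rng.normal(0.0, 0.2, x.shape)
smooth = gp_smooth(x, noisy, x)
```

Because the kernel only couples nearby points, the prior accommodates arbitrary curvature at scales larger than the length scale, which is what lets the tracker represent non-linear fascicle shapes without a rigid parametric model.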