Face reenactment aims to animate a source face image to a different pose and expression provided by a driving image. Existing approaches are either designed for a specific identity or suffer from identity-preservation problems in one-shot or few-shot scenarios. In this paper, we introduce a method for one-shot face reenactment that uses reconstructed 3D meshes (i.e., the source mesh and the driving mesh) as guidance to learn the optical flow needed for synthesizing the reenacted face. Technically, we explicitly exclude the driving face's identity information from the reconstructed driving mesh. In this way, our network can focus on motion estimation for the source face without interference from the driving face's shape. We propose a motion net, an asymmetric autoencoder, to learn the face motion. The encoder is a graph convolutional network (GCN) that learns a latent motion vector from the meshes,
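The design outlined in this abstract (a GCN encoder over the source and driving meshes producing a latent motion vector, paired with a convolutional decoder that turns it into a dense optical-flow field) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation; the layer widths, vertex count, adjacency handling, and flow resolution are assumptions.

```python
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """One graph-convolution layer: neighborhood aggregation followed by a linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (B, V, in_dim) vertex features; adj: (V, V) normalized adjacency
        return torch.relu(self.linear(adj @ x))


class MotionNet(nn.Module):
    """Asymmetric autoencoder: GCN encoder -> latent motion vector -> CNN flow decoder."""
    def __init__(self, num_vertices, latent_dim=256, flow_size=64):
        super().__init__()
        # Encoder: source + driving vertex coordinates (3 + 3 = 6 channels) -> latent vector
        self.enc1 = GraphConv(6, 32)
        self.enc2 = GraphConv(32, 64)
        self.to_latent = nn.Linear(num_vertices * 64, latent_dim)
        # Decoder: latent vector -> 2-channel optical-flow map of size flow_size x flow_size
        self.fc = nn.Linear(latent_dim, 128 * (flow_size // 8) ** 2)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (128, flow_size // 8, flow_size // 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),   # (dx, dy) per pixel
        )

    def forward(self, src_mesh, drv_mesh, adj):
        # src_mesh, drv_mesh: (B, V, 3) vertex coordinates; adj: (V, V)
        x = torch.cat([src_mesh, drv_mesh], dim=-1)   # (B, V, 6)
        x = self.enc2(self.enc1(x, adj), adj)         # (B, V, 64)
        z = self.to_latent(x.flatten(1))              # latent motion vector (B, latent_dim)
        return self.decoder(self.fc(z))               # dense flow (B, 2, flow_size, flow_size)


# Example with a toy mesh of 468 vertices (vertex count and identity adjacency are placeholders)
V = 468
net = MotionNet(num_vertices=V)
flow = net(torch.randn(1, V, 3), torch.randn(1, V, 3), torch.eye(V))
print(flow.shape)  # torch.Size([1, 2, 64, 64])
```

The asymmetry referred to in the abstract is reflected here only loosely: the encoder operates on graph-structured mesh vertices while the decoder is a standard convolutional upsampler over an image grid.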
In this paper, fermented solid beef inoculated with two starter cultures (Lactobacillus curvatus and Pediococcus pentosaceus) at different inoculum concentrations, fermentation times, and fermentation temperatures was evaluated for changes in protein composition (water-soluble, salt-soluble, and insoluble proteins) and nonprotein nitrogen during fermentation, and the relevance of these changes to textural properties was also investigated. The amounts of water-soluble and salt-soluble proteins decreased while nonprotein nitrogen and insoluble proteins increased as fermentation progressed. As the inoculum concentration increased from 0.5% to 2.5%, the water-soluble and salt-soluble proteins decreased from 23.8 to 19.6 mg/g and from 51.2 to 33.5 mg/g, respectively, and the nonprotein and insoluble proteins increased from 17.8 to 24.4 mg/g and from 20.3 to 49.5 mg/g, respectively. During 0 to 32 hr of fermentation, the water-soluble and salt-soluble proteins decreased from 24.4 to 16.0 mg/g and from 54.0 to 22.0 mg/g, respectively, and the nonprotein and insoluble proteins increased from 16.3 to 30.8 mg/g and from 19.6 to 70.5 mg/g, respectively. As the fermentation temperature increased from 20°C to 40°C, the water-soluble proteins decreased from 34.6 to 21.8 mg/g, the salt-soluble proteins decreased from 60.0 to 21.2 mg/g, and the nonprotein and insoluble proteins increased from 16.2 to 24.3 mg/g and from 29.7 to 70.4 mg/g, respectively. SDS-PAGE analysis of the water-soluble and salt-soluble proteins showed that the solid beef proteins tended to be degraded by proteases and peptidases into smaller proteins, peptides, and amino acids, and partially formed large polymers. Hardness, elasticity, gumminess, and chewiness were significantly negatively correlated with the water-soluble and salt-soluble proteins and significantly positively correlated with the insoluble proteins. The optimum inoculum concentration, fermentation time, and fermentation temperature were 2%, 32 hr, and 35°C, respectively.

Practical applications: This paper studied the effects of fermentation conditions, including inoculum concentration, fermentation temperature, and fermentation time, on changes in the protein composition and nonprotein nitrogen of fermented solid beef, and explored the relationship between protein composition and textural properties. Fermentation degrades proteins into smaller proteins, peptides, and free amino acids, which benefits absorption and health, and fermented meat products are becoming increasingly popular. The quality of beef products depends on the muscle proteins, and changes in protein composition strongly affect product properties. These preliminary observations provide a theoretical basis for further research on the quality evaluation of fermented meat products and may help improve their quality.
This paper proposes a novel generative adversarial network for one-shot face reenactment, which can animate a single face image to a different pose and expression (provided by a driving image) while keeping its original appearance. The core of our network is a novel mechanism called appearance adaptive normalization, which effectively integrates the appearance information from the input image into our face generator by modulating the feature maps of the generator with learned adaptive parameters. Furthermore, we specially design a local net to reenact the local facial components (i.e., eyes, nose, and mouth) first, which is a much easier task for the network to learn and which in turn provides explicit anchors to guide our face generator in learning the global appearance and pose-and-expression. Extensive quantitative and qualitative experiments demonstrate the effectiveness of our model compared with prior one-shot methods.
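The appearance adaptive normalization described in this abstract (generator feature maps modulated by parameters predicted from the source image's appearance) can be sketched as below. This is not the paper's implementation; the appearance-embedding dimension and the use of instance normalization are assumptions, and the mechanism is written here in the spirit of AdaIN-style conditional normalization.

```python
import torch
import torch.nn as nn


class AppearanceAdaptiveNorm(nn.Module):
    """Modulate a generator feature map with scale/shift predicted from an appearance code."""
    def __init__(self, num_channels, appearance_dim=256):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        # Learned mappings from the appearance embedding to per-channel gamma/beta
        self.to_gamma = nn.Linear(appearance_dim, num_channels)
        self.to_beta = nn.Linear(appearance_dim, num_channels)

    def forward(self, feat, appearance):
        # feat: (B, C, H, W) generator feature map
        # appearance: (B, appearance_dim) embedding of the source image
        gamma = self.to_gamma(appearance).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = self.to_beta(appearance).unsqueeze(-1).unsqueeze(-1)    # (B, C, 1, 1)
        return (1 + gamma) * self.norm(feat) + beta


# Example: modulate a 128-channel feature map with a 256-d appearance code
aan = AppearanceAdaptiveNorm(num_channels=128)
out = aan(torch.randn(2, 128, 32, 32), torch.randn(2, 256))
print(out.shape)  # torch.Size([2, 128, 32, 32])
```

Placed at several resolutions inside the generator, such a layer injects the source appearance at every stage of synthesis while the spatial content is driven by the pose-and-expression signal.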