The goal of many computational physicists and chemists is the ability to bridge the gap between atomistic length scales of a few multiples of an Ångström (Å), i.e., 10⁻¹⁰ m, and meso- or macroscopic length scales by means of simulations. The same applies to timescales. Machine learning techniques appear to bring this goal within reach. This work applies the recently published on-the-fly machine-learned force field techniques using a variant of the Gaussian approximation potentials combined with Bayesian regression and molecular dynamics, as efficiently implemented in the Vienna ab initio simulation package, VASP. The generation of these force fields follows active-learning schemes. We apply these force fields to simple oxides such as MgO and more complex reducible oxides such as iron oxide, examine their generalizability, and further increase complexity by studying water adsorption on these metal oxide surfaces. We successfully examined surface properties of pristine and reconstructed MgO and Fe3O4 surfaces. However, the accurate description of water–oxide interfaces by machine-learned force fields, especially for iron oxides, remains a field offering plenty of research opportunities.
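The on-the-fly active-learning scheme described above can be illustrated with a toy sketch (this is not the VASP implementation): a surrogate model with a Bayesian uncertainty estimate replaces the expensive first-principles calculation whenever its predicted error is small, and triggers a reference calculation plus a refit otherwise. The 1-D potential, kernel, length scale, and error threshold below are all illustrative choices.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two sets of 1-D points."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def fit_gp(X, y, noise=1e-6):
    """Fit a Gaussian-process surrogate; return a mean/variance predictor."""
    K = rbf(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    def predict(xs):
        Ks = rbf(xs, X)
        mean = Ks @ alpha
        var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
        return mean, np.maximum(var, 0.0)
    return predict

def reference_energy(x):
    """Stand-in for an expensive first-principles (e.g., DFT) calculation."""
    return np.sin(3.0 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
X_train = np.array([0.0])                 # initial training configuration
y_train = reference_energy(X_train)
predict = fit_gp(X_train, y_train)

n_reference_calls = 0
trajectory = rng.uniform(-2, 2, size=200)  # mock MD configurations
for x in trajectory:
    mean, var = predict(np.array([x]))
    if np.sqrt(var[0]) > 0.2:              # Bayesian error estimate too large:
        # fall back to the reference calculation and retrain on the fly
        X_train = np.append(X_train, x)
        y_train = np.append(y_train, reference_energy(x))
        predict = fit_gp(X_train, y_train)
        n_reference_calls += 1

print(n_reference_calls, "reference calls for", len(trajectory), "MD steps")
```

The point of the scheme is visible in the output: only a small fraction of the trajectory requires the expensive reference calculation, while the surrogate handles the rest.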
Generating photorealistic facial animations is still a challenging task in computer graphics, and synthetically generated facial animations often do not meet the visual quality of captured video sequences. Video sequences, on the other hand, need to be captured prior to the animation stage and do not offer the same animation flexibility as computer graphics models. We present an inexpensive method for video-based facial animation, which combines the photorealism of real videos with the flexibility of CGI-based animation by extracting dynamic texture sequences from existing multi-view footage. To synthesize new facial performances, these texture sequences are concatenated in a motion-graph-like way. In order to ensure realistic appearance, we combine a warp-based optimization scheme with a modified cross dissolve to prevent visual artefacts during the transition between texture sequences. Our approach makes photorealistic facial re-animation from existing video footage possible, which is especially useful in applications like video editing or the animation of digital characters.
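The transition step can be sketched in simplified form. The sketch below shows only a plain linear cross dissolve over an overlap window between two texture sequences; the paper's warp-based optimization and the "modified" part of the dissolve are omitted, and the constant-valued mock frames are stand-ins for real texture images.

```python
import numpy as np

def cross_dissolve(seq_a, seq_b):
    """Blend the tail of sequence A into the head of sequence B.

    Both inputs are equally long lists of frames (H x W arrays); the blend
    weight ramps from near 0 to near 1 across the overlap window.
    """
    n = len(seq_a)
    out = []
    for i, (fa, fb) in enumerate(zip(seq_a, seq_b)):
        w = (i + 1) / (n + 1)            # weight of the incoming sequence
        out.append((1.0 - w) * fa + w * fb)
    return out

# Mock texture frames: outgoing sequence is dark, incoming is bright.
seq_a = [np.zeros((4, 4)) for _ in range(5)]
seq_b = [np.ones((4, 4)) for _ in range(5)]
blended = cross_dissolve(seq_a, seq_b)
print([round(f.mean(), 2) for f in blended])
```

In a motion-graph-like concatenation, such a blended window would be inserted at every edge where one captured texture sequence hands over to the next.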
In this paper, we present a system to capture and animate a highly realistic avatar model of a user in real-time. The animated human model consists of a rigged 3D mesh and a texture map. The system is based on KinectV2 input, which captures the skeleton of the current pose of the subject in order to animate the human shape model. An additional high-resolution RGB camera is used to capture the face for updating the texture map on each frame. With this combination of image-based rendering with computer graphics, we achieve photo-realistic animations in real-time. Additionally, this approach is well suited for networked scenarios because of the low per-frame amount of data needed to animate the model, which consists of motion capture parameters and a video frame. With experimental results, we demonstrate the high degree of realism of the presented approach.
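The low per-frame bandwidth claim can be made concrete with a hypothetical message layout (the field names and packing format are illustrative assumptions, not the paper's protocol): the skeletal pose amounts to a few hundred bytes, so the packet size is dominated by the compressed video frame alone.

```python
import struct
from dataclasses import dataclass

N_JOINTS = 25  # KinectV2 tracks 25 skeleton joints

@dataclass
class FrameUpdate:
    """Hypothetical per-frame network message for animating the avatar."""
    joint_rotations: list   # 4 quaternion components per joint
    texture_jpeg: bytes     # compressed face region for the texture map

    def pack(self) -> bytes:
        # 4-byte length header, then pose parameters, then the video frame.
        pose = struct.pack(f'{4 * N_JOINTS}f', *self.joint_rotations)
        header = struct.pack('I', len(self.texture_jpeg))
        return header + pose + self.texture_jpeg

# Mock payload: identity rotations and a ~20 kB stand-in for a JPEG frame.
update = FrameUpdate([0.0] * (4 * N_JOINTS), b'\xff\xd8' + b'\x00' * 20000)
packet = update.pack()
print(len(packet), "bytes total,", 4 * 4 * N_JOINTS, "bytes of pose data")
```

The pose parameters add only 400 bytes per frame, which is why transmitting motion capture parameters plus one video frame is cheap compared with streaming a full rendered view.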