Traditional graphics methods for simulating liquid-air dynamics under different scenarios usually employ separate approaches with sophisticated interface tracking/reconstruction techniques. In this paper, we propose a novel unified approach that easily and effectively produces a variety of liquid-air interface phenomena. These phenomena, such as complex surface splashes, bubble interactions, and surface tension effects, can coexist in a single simulation and are created within the same computational framework, which is unique in that it is free from any complicated interface tracking/reconstruction procedures. Our approach builds on the two-phase lattice Boltzmann method with the mean-field model, which provides a unified framework for interface dynamics but is numerically unstable under turbulent conditions. To address the drawbacks of existing approaches, we propose techniques that suppress oscillations for a significant stability enhancement, and we derive a new subgrid-scale model that further improves stability while faithfully preserving liquid-air interface details without excessive diffusion by taking density variation into account. The whole framework is highly parallel, enabling a very efficient implementation. Comparisons with related approaches show the superiority of our method in producing stable, detail-preserving simulations with multiple multiphase phenomena involved simultaneously. A set of animation results demonstrates the effectiveness of our method.
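As context for readers unfamiliar with the base scheme, the following is a minimal single-phase D2Q9 lattice Boltzmann (BGK) stream-and-collide step in plain Python. It is only an illustrative sketch of the standard method the paper builds on, not the paper's two-phase mean-field model, stability techniques, or subgrid-scale extension; all names and parameters are illustrative.

```python
# D2Q9 lattice: weights and discrete velocities.
W = [4/9] + [1/9] * 4 + [1/36] * 4
E = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]

def equilibrium(rho, ux, uy):
    """Standard second-order BGK equilibrium distribution."""
    usq = ux * ux + uy * uy
    feq = []
    for w, (ex, ey) in zip(W, E):
        eu = ex * ux + ey * uy
        feq.append(w * rho * (1 + 3 * eu + 4.5 * eu * eu - 1.5 * usq))
    return feq

def lbm_step(f, nx, ny, tau):
    """One collide-then-stream step on a periodic nx-by-ny grid.

    f[x][y] holds the 9 distribution values at cell (x, y);
    tau is the BGK relaxation time (sets the viscosity).
    """
    # Collision: relax each cell toward its local equilibrium.
    for x in range(nx):
        for y in range(ny):
            rho = sum(f[x][y])
            ux = sum(fi * e[0] for fi, e in zip(f[x][y], E)) / rho
            uy = sum(fi * e[1] for fi, e in zip(f[x][y], E)) / rho
            feq = equilibrium(rho, ux, uy)
            for i in range(9):
                f[x][y][i] += (feq[i] - f[x][y][i]) / tau
    # Streaming: advect each distribution along its lattice velocity.
    g = [[[0.0] * 9 for _ in range(ny)] for _ in range(nx)]
    for x in range(nx):
        for y in range(ny):
            for i, (ex, ey) in enumerate(E):
                g[(x + ex) % nx][(y + ey) % ny][i] = f[x][y][i]
    return g
```

Because the BGK collision conserves mass and streaming merely permutes values, the total density is invariant under `lbm_step`, which makes a convenient sanity check when experimenting with the scheme.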
Laser-Induced Breakdown Spectroscopy (LIBS) is a popular technique for elemental quantitative analysis in the chemistry community, and various methods based on it have been developed to determine the concentrations of chemical samples. Despite their successful applications, existing methods still struggle to produce accurate sample analyses because of their limited prediction capability, the complex compositions of samples, and the mutual interference of elements. In this paper, we propose a novel heterogeneous stacking ensemble learning model, the Heterogeneous stACKing Ensemble Model for LIBS (Hackem-LIBS), to achieve more accurate LIBS quantitative analysis. Specifically, we propose a stacking ensemble learning framework consisting of two stages. In the first stage, we train different heterogeneous component learners on multiple sub-training sets and select the optimal learners. In the second stage, we use the enhanced features predicted by the selected learners to train a stronger meta-learner, which makes the final prediction. In addition, we combine a Genetic Algorithm (GA) with Sequential Forward Selection (SFS) to reduce the redundancy of training features, ensuring more effective learning and higher computational efficiency. Extensive experiments on two public benchmarks show that our approach determines element concentrations more accurately and is practically applicable to the quantitative analysis of complex chemical samples via the LIBS technique.
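To make the two-stage stacking idea concrete, here is a minimal generic sketch in plain Python: stage one produces out-of-fold predictions from two deliberately heterogeneous base learners, and stage two fits a least-squares meta-learner on those predictions. This illustrates stacking in general, not Hackem-LIBS itself; the learner-selection step and the GA+SFS feature selection are omitted, and all names and data are illustrative.

```python
# --- toy heterogeneous base learners (stand-ins for component learners) ---
def fit_slope(data):
    """Linear regressor through the origin, y ~ k*x."""
    k = sum(x * y for x, y in data) / sum(x * x for x, y in data)
    return lambda x: k * x

def fit_mean(data):
    """Constant regressor predicting the mean target."""
    m = sum(y for _, y in data) / len(data)
    return lambda x: m

def oof_features(data, k_folds, fitters):
    """Stage 1: out-of-fold base predictions used as meta-features.

    Each point is predicted by models trained without its own fold,
    so the meta-learner never sees leaked in-fold predictions.
    """
    feats, targets = [], []
    for i, (x, y) in enumerate(data):
        fold = i % k_folds
        train = [p for j, p in enumerate(data) if j % k_folds != fold]
        models = [fit(train) for fit in fitters]
        feats.append([m(x) for m in models])
        targets.append(y)
    return feats, targets

def fit_meta(feats, targets):
    """Stage 2: least-squares meta-learner over two base predictions,
    solved directly via the 2x2 normal equations."""
    a = sum(f[0] * f[0] for f in feats)
    b = sum(f[0] * f[1] for f in feats)
    d = sum(f[1] * f[1] for f in feats)
    p = sum(f[0] * t for f, t in zip(feats, targets))
    q = sum(f[1] * t for f, t in zip(feats, targets))
    det = a * d - b * b
    w0, w1 = (d * p - b * q) / det, (a * q - b * p) / det
    return lambda z: w0 * z[0] + w1 * z[1]
```

At prediction time, the base learners are refit on the full training set, their outputs on a new input form the meta-feature vector, and the meta-learner combines them into the final estimate.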
Person image synthesis aims at transferring the appearance of a source person image to a target pose. Existing methods cannot handle large pose variations and therefore suffer from two critical problems: 1) synthesis distortion due to the entanglement of pose and appearance information among different body components; and 2) failure to preserve the original semantics (e.g., the same outfit). In this paper, we explicitly address these two problems by proposing a Pose and Attribute Consistent Person Image Synthesis Network (PAC-GAN). First, to reduce pose and appearance matching ambiguity, we propose a component-wise transferring model consisting of two stages. The first stage focuses only on synthesizing the target pose, while the second renders the target appearance by explicitly transferring the appearance information from the source image to the target image in a component-wise manner. In this way, source-target matching ambiguity is eliminated by the component-wise disentanglement of pose and appearance synthesis. Second, to maintain attribute consistency, we represent the input image as an attribute vector and impose a high-level semantic constraint using this vector to regularize the target synthesis. Extensive experimental results on the DeepFashion dataset demonstrate the superiority of our method over the state of the art, especially in maintaining pose and attribute consistency under large pose variations.
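The attribute-consistency idea — regularizing synthesis with a high-level semantic constraint between attribute vectors — can be sketched as a simple loss term. The snippet below is a generic illustration under assumed conventions (mean-squared error between vectors, a hypothetical weight `lam`); the paper's actual attribute extractor, loss form, and weights are not specified here.

```python
def attribute_consistency_loss(attr_src, attr_gen):
    """Mean squared error between the source image's attribute vector
    and the one extracted from the generated image."""
    n = len(attr_src)
    return sum((a - b) ** 2 for a, b in zip(attr_src, attr_gen)) / n

def total_loss(adv_loss, recon_loss, attr_src, attr_gen, lam=1.0):
    """Hypothetical training objective: adversarial + reconstruction
    terms plus the weighted attribute-consistency regularizer."""
    return adv_loss + recon_loss + lam * attribute_consistency_loss(attr_src, attr_gen)
```

Identical attribute vectors contribute zero penalty, so the regularizer only pushes the generator when the synthesized image drifts from the source's semantics (e.g., a changed outfit).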