Fast and reliable physically-based simulation techniques are essential for providing flexible visual effects in computer graphics content. In this paper, we propose a fast and reliable hierarchical cloth simulation method that combines conventional physically-based simulation with deep neural networks (DNNs). The coarsest level of the hierarchical model is computed with a conventional physically-based simulation, and the more detailed levels are generated by inference with DNN models. Through experiments under various conditions, we demonstrate that our method produces fast and reliable cloth simulation results.
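To make the coarse-to-fine pipeline above concrete, here is a minimal Python sketch of one frame of such a hierarchy. The names `step_hierarchy`, `physics_step`, and `upsample_nets`, as well as the toy stand-ins at the bottom, are hypothetical placeholders for illustration, not the paper's actual solver or trained networks.

```python
import numpy as np

def step_hierarchy(coarse_state, physics_step, upsample_nets):
    """One frame of the hierarchical pipeline (names are illustrative).

    The coarsest level is advanced by a conventional cloth solver;
    each finer level is generated by inference from the level below.
    """
    coarse_state = physics_step(coarse_state)         # physically-based step
    levels = [coarse_state]
    for net in upsample_nets:                         # coarse-to-fine inference
        levels.append(net(levels[-1]))
    return levels                                     # one mesh per resolution

# Toy stand-ins: a gravity-only "solver" and vertex duplication as a fake "DNN".
physics_step = lambda verts: verts + np.array([0.0, -0.01, 0.0])
upsample = lambda verts: np.repeat(verts, 2, axis=0)  # placeholder for a trained net
frame_levels = step_hierarchy(np.zeros((16, 3)), physics_step, [upsample, upsample])
```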
In this paper, we propose a two-step temporal interpolation network using forward advection to generate smoke simulations efficiently. By converting a low-frame-rate smoke simulation computed with a large time step into a high-frame-rate one through inference with temporal interpolation networks, the proposed method generates high-frame-rate smoke simulations at low computational cost. The first step of the proposed method is optical-flow-based temporal interpolation using deep neural networks (DNNs) on two given smoke animation frames. In the next step, we compute intermediate smoke frames with forward advection, a physical computation with low computational cost, and interpolate between the results of the forward advection and those of the first step to produce more accurate, enhanced interpolated results. We performed quantitative analyses of the results generated by the proposed method and by previous temporal interpolation methods, and we experimentally compared the performance of the proposed method with that of previous DNN-based smoke simulation methods. We found that the results generated by the proposed method are more accurate and closer to the ground-truth smoke simulation than those of the previous temporal interpolation methods, and that the proposed method generates smoke simulation results at lower computational cost than previous DNN-based smoke simulation methods.
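A minimal sketch of the two-step idea, assuming a simple nearest-cell scatter as the forward advection and placeholder callables (`dnn_interp`, `blend_net`) standing in for the trained networks, might look like this; none of these names or layer choices come from the paper itself.

```python
import numpy as np

def forward_advect(density, velocity, dt):
    """Cheap forward advection: push each cell's density along its velocity.
    A nearest-cell scatter is used here purely for illustration."""
    h, w = density.shape
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.clip(np.rint(xs + velocity[..., 0] * dt), 0, w - 1).astype(int)
    ty = np.clip(np.rint(ys + velocity[..., 1] * dt), 0, h - 1).astype(int)
    out = np.zeros_like(density)
    np.add.at(out, (ty, tx), density)                # scatter density forward
    return out

def interpolate_frame(f0, f1, vel0, t, dnn_interp, blend_net):
    """Two-step interpolation at fraction t between frames f0 and f1."""
    cand_dnn = dnn_interp(f0, f1, t)                 # step 1: learned interpolation
    cand_adv = forward_advect(f0, vel0, t)           # step 2a: physical advection
    return blend_net(cand_dnn, cand_adv)             # step 2b: combine both candidates

# Toy stand-ins for the trained networks (the real ones are learned models).
dnn_interp = lambda f0, f1, t: (1 - t) * f0 + t * f1
blend_net = lambda a, b: 0.5 * (a + b)
d0, d1 = np.random.rand(32, 32), np.random.rand(32, 32)
v0 = np.random.rand(32, 32, 2)
mid_frame = interpolate_frame(d0, d1, v0, 0.5, dnn_interp, blend_net)
```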
We propose a deep neural network model that recognizes the position and velocity of a fast-moving object in a video sequence and predicts the object's future motion. When a fast-moving subject is filmed with a regular camera rather than a super-high-speed camera, severe motion blur often makes it difficult to recognize the object's exact position and velocity in the video. Additionally, because a fast-moving object usually leaves the camera's field of view quickly, the number of captured frames used as input for future-motion prediction should be minimized. Our model takes a short video sequence of two frames containing a fast-moving object as input, uses the motion blur as additional information to recognize the object's position and velocity, and predicts the video frame containing the object's future motion. Experiments show that our model significantly outperforms existing future-frame prediction models in determining the future position and velocity of an object in two physical scenarios in which a fast-moving two-dimensional object appears.
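As a rough illustration only, a two-frame-in, one-frame-out predictor of this kind could be sketched in PyTorch as below. The layer choices are assumptions made for the sketch, not the model described above; the only property carried over from the abstract is that the blurred frames are fed in unmodified, so the network can use the blur as a motion cue.

```python
import torch
import torch.nn as nn

class FutureFramePredictor(nn.Module):
    """Minimal sketch: two stacked RGB frames in, one predicted frame out.
    The architecture is illustrative, not the paper's; motion blur in the
    inputs is left intact so the convolutions can exploit it as a cue."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),   # 2 RGB frames stacked
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(64, 3, 3, padding=1)    # predicted RGB frame

    def forward(self, frame_a, frame_b):
        x = torch.cat([frame_a, frame_b], dim=1)         # (B, 6, H, W)
        return self.decoder(self.encoder(x))

model = FutureFramePredictor()
pred = model(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))  # future frame
```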