We propose Neural Crossbreed, a feed-forward neural network that learns a semantic change of input images in a latent space to create a morphing effect. Because the network learns a semantic change, a sequence of meaningful intermediate images can be generated without requiring the user to specify explicit correspondences. In addition, learning the semantic change makes it possible to morph between images that contain objects with significantly different poses or camera views. Furthermore, just as in conventional morphing techniques, our morphing network can handle shape and appearance transitions separately by disentangling content transfer from style transfer, which enriches its usability. We prepare a training dataset for morphing using a pre-trained BigGAN, which generates an intermediate image by interpolating two latent vectors at an intended morphing value. This is the first attempt to address image morphing using a pre-trained generative model in order to learn semantic transformations. The experiments show that Neural Crossbreed produces high-quality morphed images, overcoming various limitations associated with conventional approaches. In addition, Neural Crossbreed can be further extended for diverse applications such as multi-image morphing, appearance transfer, and video frame interpolation.
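As a rough illustration of how such training data could be constructed, the sketch below generates one supervision triplet with a pre-trained generator. The `biggan` callable, the latent codes `z_a`/`c_a` and `z_b`/`c_b`, and the use of plain linear interpolation are all assumptions for illustration; the abstract only states that two latent vectors are interpolated at the intended morphing value.

```python
import torch

def make_morph_training_triplet(biggan, z_a, c_a, z_b, c_b, alpha):
    """Produce (image_A, image_B, image_alpha) for one training sample.

    `biggan` stands in for a pre-trained generator that maps a noise vector z
    and a class embedding c to an image tensor; alpha in [0, 1] is the
    intended morphing value.
    """
    with torch.no_grad():
        img_a = biggan(z_a, c_a)  # endpoint image A
        img_b = biggan(z_b, c_b)  # endpoint image B
        # Linear interpolation in the latent / class-embedding space yields a
        # semantically intermediate image used as supervision for value alpha.
        z_mid = (1.0 - alpha) * z_a + alpha * z_b
        c_mid = (1.0 - alpha) * c_a + alpha * c_b
        img_mid = biggan(z_mid, c_mid)
    return img_a, img_b, img_mid
```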
Depth estimation is an important computer vision problem with many practical applications to mobile devices. While many solutions have been proposed for this task, they are usually very computationally expensive and thus are not applicable for on-device inference. To address this problem, we introduce the first Mobile AI challenge, where the goal is to develop end-to-end, deep learning-based depth estimation solutions that demonstrate near-real-time performance on smartphones and IoT platforms. For this, the participants were provided with a new large-scale dataset containing RGB-depth image pairs obtained with a dedicated stereo ZED camera producing high-resolution depth maps for objects located at up to 50 meters. The runtime of all models was evaluated on the popular Raspberry Pi 4 platform with a mobile ARM-based Broadcom chipset. The proposed solutions can generate VGA resolution depth maps at up to 10 FPS on the Raspberry Pi 4 while achieving high-fidelity results, and are compatible with any Android or Linux-based mobile device. A detailed description of all models developed in the challenge is provided in this paper.
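For readers who want a concrete picture of on-device evaluation, the following is a minimal sketch of timing a depth model with the TensorFlow Lite Python interpreter. The model file name and the VGA input shape are assumptions, not the challenge's actual evaluation harness.

```python
import time
import numpy as np
import tensorflow as tf

# Hypothetical TFLite model file; replace with an actual converted model.
interpreter = tf.lite.Interpreter(model_path="depth_estimator.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Assumed VGA-resolution RGB input (640x480); use the model's real input shape.
frame = np.random.rand(1, 480, 640, 3).astype(np.float32)

# Warm-up run before timing.
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()

runs = 20
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
elapsed = time.perf_counter() - start

depth_map = interpreter.get_tensor(out["index"])  # predicted depth map
print(f"average latency: {elapsed / runs * 1000:.1f} ms "
      f"({runs / elapsed:.1f} FPS), output shape: {depth_map.shape}")
```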
Traditional approaches to retargeting existing facial blendshape animations to other characters rely heavily on manually paired data, including corresponding anchors, expressions, or semantic parametrizations, to preserve the characteristics of the original performance. In this paper, inspired by recent developments in face swapping and reenactment, we propose a novel unsupervised learning method that reformulates the retargeting of 3D facial blendshape-based animations in the image domain. The expressions of a source model are transferred to a target model via the rendered images of the source animation. For this purpose, a reenactment network is trained with the rendered images of various expressions created by the source and target models in a shared latent space. The use of a shared latent space enables automatic cross-mapping, obviating the need for manual pairing. Next, a blendshape prediction network is used to extract the blendshape weights from the translated image to complete the retargeting of the animation onto a 3D target model. Our method allows for fully unsupervised retargeting of facial expressions between models of different configurations and, once trained, is suitable for automatic real-time applications.
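The two-stage inference described above can be sketched as a simple pipeline. The class and network names below are hypothetical placeholders, and the clamping of the predicted weights is an assumption rather than something stated in the abstract.

```python
import torch
import torch.nn as nn

class BlendshapeRetargeter(nn.Module):
    """Hypothetical two-stage pipeline: translate a rendered source-expression
    image into the target character's image domain, then regress the target
    model's blendshape weights from the translated image."""

    def __init__(self, reenactment_net: nn.Module, blendshape_net: nn.Module):
        super().__init__()
        self.reenactment_net = reenactment_net  # image-to-image translator
        self.blendshape_net = blendshape_net    # image -> blendshape weights

    @torch.no_grad()
    def forward(self, source_render: torch.Tensor) -> torch.Tensor:
        # 1) Translate the rendered source expression into the target domain.
        target_image = self.reenactment_net(source_render)
        # 2) Predict the target model's blendshape weights from that image.
        weights = self.blendshape_net(target_image)
        # Blendshape weights are typically kept in [0, 1] before being applied
        # to the 3D target rig (an assumption for this sketch).
        return weights.clamp(0.0, 1.0)
```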
We propose a deep learning-based method that can estimate appropriate lighting for both indoor and outdoor images. The method consists of two networks: a Crop-to-PanoLDR network and an LDR-to-HDR network. The Crop-to-PanoLDR network predicts a low dynamic range (LDR) environment map from a single, partially observed, normal field-of-view image, and the LDR-to-HDR network transforms the predicted LDR image into a high dynamic range (HDR) environment map that includes high-intensity light information. The HDR environment map generated through this process is applied when rendering virtual objects into the given image. The direction of the estimated light, along with the ambient light illuminating the virtual object, is examined to verify the effectiveness of the proposed method. For this, the results from our method are compared with those from methods that consider either indoor or outdoor images only. In addition, the effect of the loss function, which classifies images as indoor or outdoor, was tested and verified. Finally, a user test was conducted to compare the quality of the environment maps created in this study with those created by existing research.
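A minimal sketch of the two-stage inference is given below. The two network arguments are stand-ins for the trained Crop-to-PanoLDR and LDR-to-HDR models, and the equirectangular format of the environment map is an assumption for illustration.

```python
import torch

@torch.no_grad()
def estimate_environment_map(crop_to_pano_ldr, ldr_to_hdr, nfov_image):
    """Sketch of the two-stage lighting estimation. `crop_to_pano_ldr` and
    `ldr_to_hdr` stand in for the two trained networks; `nfov_image` is a
    (1, 3, H, W) tensor holding the single normal field-of-view input."""
    # Stage 1: predict a full LDR panoramic environment map (assumed to be an
    # equirectangular image) from the partially observed input view.
    ldr_env = crop_to_pano_ldr(nfov_image)

    # Stage 2: lift the LDR panorama to HDR so that high-intensity light
    # sources are recovered for rendering virtual objects into the scene.
    hdr_env = ldr_to_hdr(ldr_env)
    return hdr_env
```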