Figure 1: We partition the original image (left) into a grid mesh and deform it to fit the new desired dimensions (right), such that quad faces covering important image regions are optimized to scale uniformly while regions with homogeneous content are allowed to distort. The scaling and stretching of the image content is guided by a significance map that combines gradient and saliency measures.

Abstract: We present a "scale-and-stretch" warping method that allows resizing images to arbitrary aspect ratios while preserving visually prominent features. The method operates by iteratively computing optimal local scaling factors for each local region and updating a warped image that matches these scaling factors as closely as possible. The amount of deformation of the image content is guided by a significance map that characterizes the visual attractiveness of each pixel; this significance map is computed automatically using a novel combination of gradient- and saliency-based measures. Our technique diverts the distortion due to resizing into image regions with homogeneous content, so that the impact on perceptually important features is minimized. Unlike previous approaches, our method distributes the distortion in all spatial directions, even when the resizing operation is applied only horizontally or vertically, thus fully utilizing the available homogeneous regions to absorb the distortion. We develop an efficient formulation for the nonlinear optimization involved in computing the warping function, allowing interactive image resizing.
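To make the significance-map idea concrete, the following is a minimal sketch of how one might combine a gradient-magnitude term with a precomputed saliency map and average it over the quads of a grid mesh. The function names, the linear blending weight `alpha`, and the uniform grid construction are illustrative assumptions, not the paper's exact formulation; the saliency map is assumed to be supplied by an external detector.

```python
import numpy as np

def significance_map(image_gray, saliency, alpha=0.5):
    """Blend normalized gradient magnitude with a normalized saliency map.
    The 50/50 weighting is an assumption; the paper only states that the
    two measures are combined."""
    gy, gx = np.gradient(image_gray.astype(np.float64))
    grad = np.hypot(gx, gy)
    grad /= grad.max() + 1e-12            # normalize to [0, 1]
    sal = saliency / (saliency.max() + 1e-12)
    return alpha * grad + (1.0 - alpha) * sal

def quad_significance(sig, grid_h, grid_w):
    """Average per-pixel significance over each quad of a uniform grid mesh;
    these per-quad weights determine how strongly a quad resists
    non-uniform scaling during the warp optimization."""
    H, W = sig.shape
    ys = np.linspace(0, H, grid_h + 1, dtype=int)
    xs = np.linspace(0, W, grid_w + 1, dtype=int)
    weights = np.empty((grid_h, grid_w))
    for i in range(grid_h):
        for j in range(grid_w):
            weights[i, j] = sig[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    return weights
```

In the full method, these per-quad weights feed the iterative warp solver: quads with high significance are penalized for deviating from a uniform scale, while low-significance quads absorb most of the distortion.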
A curve skeleton is a 1D structure that abstracts the geometry and topology of a 3D object. Extracting curve skeletons is a fundamental problem in computer graphics, visualization, image processing, and computer vision, with many useful applications including virtual colonoscopy, collision detection, computer animation, surface reconstruction, and shape matching. In the literature [1][2], most previous methods require a volumetric discrete representation of the input model; however, converting a mesh into a volumetric representation may introduce discretization errors in both geometry and connectivity. In this work [3], we propose a novel technique to extract skeletons directly in the mesh domain, without any volumetric discretization. Our approach (Figure 1) consists of three main steps: 1) mesh contraction, 2) connectivity surgery, and 3) centeredness refinement. First, we contract a given mesh into a zero-volume skeletal shape by applying an iterative Laplacian smoothing procedure [4] with global positional constraints (a sketch of one such contraction step is given below). Second, we apply a connectivity surgery procedure to progressively convert the contracted mesh into a 1D curve skeleton. Finally, to ensure its centeredness within the mesh, we refine the skeleton by moving each skeletal node to the center of its corresponding mesh region. In contrast to previous work, our approach has the following advantages: 1) the extracted skeleton is guaranteed to be homotopic to the original object, 2) it is inherently robust to noise (see Figure 2) and avoids volumetric discretization errors, and 3) the method is very fast, rotation invariant, and pose insensitive (Figure 3).
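The sketch below shows one plausible contraction step: a least-squares solve that balances a Laplacian smoothing term (pulling each vertex toward the centroid of its neighbors) against a positional anchoring term. It uses a uniform graph Laplacian and fixed scalar weights for simplicity; the actual method is based on a cotangent Laplacian with per-vertex weights that are updated between iterations, so treat this only as an illustration of the structure of the solve.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def contract_once(verts, edges, w_L=1.0, w_H=0.1):
    """One Laplacian-contraction step.

    verts: (n, 3) array of vertex positions.
    edges: iterable of (i, j) index pairs, each undirected edge listed once.
    Solves min || w_L * L V' ||^2 + || w_H * (V' - V) ||^2 in least squares.
    Uniform Laplacian and scalar weights are simplifying assumptions."""
    verts = np.asarray(verts, dtype=float)
    n = len(verts)
    deg = np.zeros(n)
    rows, cols, vals = [], [], []
    for i, j in edges:
        deg[i] += 1.0
        deg[j] += 1.0
        rows += [i, j]; cols += [j, i]; vals += [1.0, 1.0]
    adjacency = sp.coo_matrix((vals, (rows, cols)), shape=(n, n))
    L = adjacency - sp.diags(deg)          # L v ~ deg * (neighbor centroid - v)
    A = sp.vstack([w_L * L, w_H * sp.identity(n)]).tocsr()
    b = np.vstack([np.zeros((n, 3)), w_H * verts])
    # Solve each coordinate independently; repeated calls contract the mesh
    # toward a zero-volume skeletal shape.
    return np.column_stack([spla.lsqr(A, b[:, k])[0] for k in range(3)])
```

Repeating this step (with the contraction weight increased and the anchoring weight adapted each iteration) collapses the surface toward a thin skeletal shape, which the subsequent connectivity surgery then reduces to a 1D curve.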
Abstract: Image retargeting is the process of adapting images to fit displays with various aspect ratios and sizes. Most studies on image retargeting focus on shape preservation, but they do not fully consider the preservation of structure lines, to which the human visual system is sensitive. In this paper, a patch-based retargeting scheme with an extended significance measurement is introduced to preserve the shapes of both visually salient objects and structure lines while minimizing visual distortion. In the proposed scheme, a similarity transformation constraint is used to force visually salient content to undergo as-rigid-as-possible deformation, while an optimization process smoothly propagates distortion across the image. These components enable our approach to yield pleasing content-aware warping and retargeting results. Experimental results and a user study show that our results are better than those generated by state-of-the-art approaches.
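As an illustration of the similarity transformation constraint, the sketch below measures how far a deformed patch deviates from the best-fitting 2D similarity transform (uniform scale, rotation, and translation) of its source patch. Weighting this residual by patch significance would encourage salient patches to deform as-rigidly-as-possible. The closed-form complex-number fit and the function name are illustrative choices, not the paper's exact energy formulation.

```python
import numpy as np

def similarity_energy(src_pts, dst_pts):
    """Residual energy of the best-fit 2D similarity transform mapping
    src_pts to dst_pts (both (k, 2) arrays of patch vertex positions).
    A large value means the patch's deformation is far from a pure
    similarity transform."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    src = src - src.mean(axis=0)
    dst = dst - dst.mean(axis=0)
    # Represent points as complex numbers; a similarity transform is then
    # multiplication by a single complex scalar s = scale * exp(i * theta).
    zs = src[:, 0] + 1j * src[:, 1]
    zd = dst[:, 0] + 1j * dst[:, 1]
    s = np.vdot(zs, zd) / np.vdot(zs, zs)   # least-squares complex scale
    residual = zd - s * zs
    return float(np.sum(np.abs(residual) ** 2))
```

Summing such per-patch terms, weighted by significance, and adding a smoothness term between neighboring patches yields the kind of warping energy that content-aware retargeting schemes of this type minimize.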