Figure 1: Intersecting 2D silhouettes. The silhouettes on the left were used to automatically generate the 3D model on the right. Note that the line drawings are not projections of the 3D model, but rather the input that generates the model.

Abstract: We present a new sketch-based modeling approach in which models are interactively designed by drawing their 2D silhouettes from different views. The core idea is to limit the input to 2D silhouettes, removing the need to explicitly create or position 3D elements. Arbitrarily complex models can be constructed by assembling parts defined by their silhouettes, which can be combined using CSG operations. We introduce a new simplified algorithm for computing CSG solids that leverages special properties of silhouette cylinders to convert the 3D CSG problem into one that can be handled entirely with 2D operations, making the implementation simpler and more robust. We evaluate our approach by modeling a random sampling of man-made objects taken from the words in WordNet, and show that all of the tested objects can be modeled quickly and easily using our approach.
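As a rough illustration of the silhouette-cylinder idea, the sketch below extrudes two binary 2D silhouettes along their viewing directions and intersects the resulting volumes on a voxel grid (Python/NumPy). The grid resolution, silhouette shapes, and axis conventions are assumptions; the paper's contribution is precisely to avoid such 3D computation by doing the CSG entirely in 2D, so this is only a conceptual sketch, not the authors' algorithm:

import numpy as np

N = 64  # assumed voxel resolution

def disk(n, r):
    # Binary 2D silhouette: a filled disk of radius r (in normalized [-0.5, 0.5] units).
    ys, xs = np.mgrid[0:n, 0:n] / (n - 1) - 0.5
    return (xs ** 2 + ys ** 2) <= r ** 2

front = disk(N, 0.45)                  # silhouette seen from the front (y-x plane)
side = np.zeros((N, N), dtype=bool)    # silhouette seen from the side (y-z plane)
side[N // 4: 3 * N // 4, :] = True     # a horizontal slab, for illustration

# Extrude each silhouette along its viewing direction ("silhouette cylinder")
# and take the boolean intersection: a CSG intersection on the voxel grid.
front_cyl = np.repeat(front[:, :, None], N, axis=2)  # volume indexed [y, x, z]
side_cyl = np.repeat(side[:, None, :], N, axis=1)
model = front_cyl & side_cyl
print(model.shape, int(model.sum()), "occupied voxels")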
a) Target 3D model (b) Guidance projected onto material (c) Sculpted physical replica Figure 1: We assist users in creating physical objects that match digital 3D models. Given a target 3D model (a), we project different forms of guidance onto a work in progress (b) that indicate how it must be deformed to match the target model. As the user follows this guidance, the physical object's shape approaches that of the target (c). With our system, unskilled users are able to produce accurate physical replicas of complex 3D models. Here, we recreate the Stanford bunny model (courtesy of the Stanford Computer Graphics Laboratory) out of polymer clay. AbstractWe propose a method that allows an unskilled user to create an accurate physical replica of a digital 3D model. We use a projector/camera pair to scan a work in progress, and project multiple forms of guidance onto the object itself that indicate which areas need more material, which need less, and where any ridges, valleys or depth discontinuities are. The user adjusts the model using the guidance and iterates, making the shape of the physical object approach that of the target 3D model over time. We show how this approach can be used to create a duplicate of an existing object, by scanning the object and using that scan as the target shape. The user is free to make the reproduction at a different scale and out of different materials: we turn a toy car into cake. We extend the technique to support replicating a sequence of models to create stop-motion video. We demonstrate an end-to-end system in which real-world performance capture data is retargeted to claymation. Our approach allows users to easily and accurately create complex shapes, and naturally supports a large range of materials and model sizes.
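A minimal sketch of the kind of depth-difference guidance described above (Python/NumPy). The depth maps, millimeter units, tolerance, and color coding are all assumptions for illustration, not the paper's actual pipeline; it only shows how a scan-vs-target depth comparison could be turned into a projectable image:

import numpy as np

def guidance_image(scan_depth, target_depth, tol_mm=1.0):
    # Color-coded guidance from two depth maps rendered from the projector's viewpoint.
    # If the scanned surface is farther away than the target, the object is too low
    # there and needs more material; if it is closer, material must be removed.
    diff = scan_depth - target_depth
    h, w = diff.shape
    img = np.zeros((h, w, 3), dtype=np.uint8)
    img[diff > tol_mm] = (0, 0, 255)            # add material (blue)
    img[diff < -tol_mm] = (255, 0, 0)           # remove material (red)
    img[np.abs(diff) <= tol_mm] = (0, 255, 0)   # within tolerance (green)
    return img

# Usage with synthetic data: a flat work in progress vs. a target with a bump.
ys, xs = np.mgrid[0:240, 0:320]
target = 100.0 - 5.0 * np.exp(-((xs - 160) ** 2 + (ys - 120) ** 2) / 2000.0)
scan = np.full_like(target, 100.0)
print(guidance_image(scan, target).shape)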
Figure 1: Overview. (a) A position-correcting tool. The device consists of a frame and a tool (in this case a router) mounted within that frame. The frame is positioned manually by the user. A camera on the frame (top right in the figure) is used to determine the frame's location. The device can adjust the position of the tool within the frame to correct for error in the user's coarse positioning. (b) To follow a complex path, the user need only move the frame in a rough approximation of the path. In this example, the dotted blue line shows the path that the tool would take if its position were not adjusted; the black line is its actual path. (c) An example of a shape cut out of wood using this tool.

Abstract: Many kinds of digital fabrication are accomplished by precisely moving a tool along a digitally specified path. This precise motion is typically performed fully automatically by a computer-controlled multi-axis stage. With that approach, one can only create objects smaller than the positioning stage, and large stages can be quite expensive. We propose a new approach to precise tool positioning that combines manual and automatic positioning: the user coarsely moves a frame containing the tool in an approximation of the desired path, while the device tracks the frame's location and adjusts the position of the tool within the frame to correct the user's positioning error in real time. Because the automatic positioning need only cover the range of the human's positioning error, the frame can be small and inexpensive, and because the human has unlimited range, such a frame can be used to precisely position tools over an unlimited range.
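A minimal sketch of such an error-correcting control loop (Python/NumPy). The 2D coordinates, polyline path, 10 mm actuation radius, and nearest-point targeting strategy are assumptions made for illustration; the actual device's controller and path-following strategy are more involved:

import numpy as np

ACTUATION_RADIUS_MM = 10.0  # assumed range the tool can move within the frame

def nearest_point_on_path(path, p):
    # Closest point to p over all segments of the polyline `path` (N x 2 array,
    # segments assumed non-degenerate).
    best, best_d = path[0], np.inf
    for a, b in zip(path[:-1], path[1:]):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        q = a + t * ab
        d = np.linalg.norm(p - q)
        if d < best_d:
            best, best_d = q, d
    return best

def tool_offset(frame_pos, path):
    # Offset to apply to the tool within the frame so it lands on the path,
    # clamped to the actuation range (the user must re-center if it saturates).
    target = nearest_point_on_path(path, frame_pos)
    offset = target - frame_pos
    dist = np.linalg.norm(offset)
    if dist > ACTUATION_RADIUS_MM:
        offset *= ACTUATION_RADIUS_MM / dist
    return offset

# Usage: a straight path and a frame position 4 mm off the path.
path = np.array([[0.0, 0.0], [100.0, 0.0]])
print(tool_offset(np.array([30.0, 4.0]), path))  # -> approximately [0, -4]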