Industrial knitting machines are commonly used to manufacture complicated shapes from yarns; however, designing patterns for these machines requires extensive training. We present the first general visual programming interface for creating 3D objects with complex surface finishes on industrial knitting machines. At the core of our interface is a new, augmented version of the stitch mesh data structure. The augmented stitch mesh stores low-level knitting operations per face and encodes the dependencies between faces using directed edge labels. Our system can generate knittable augmented stitch meshes from 3D models, allows users to edit these meshes in a way that preserves their knittability, and can schedule the execution order and location of each face for production on a knitting machine. Our system is general, in that its knittability-preserving editing operations are sufficient to transform between any two machine-knittable stitch patterns with the same orientation on the same surface. We demonstrate the power and flexibility of our pipeline by using it to create and knit objects featuring a wide range of patterns and textures, including intarsia and Fair Isle colorwork; knit and purl textures; cable patterns; and laces.
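The abstract describes the augmented stitch mesh only at a high level. As a rough illustration (not the paper's actual data structures; every name below is hypothetical), such a mesh can be sketched as faces that each carry a list of low-level knitting operations, with directed labels on shared edges recording which neighbor must be knit first:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List

# Hypothetical sketch of an "augmented stitch mesh": each face stores the
# low-level machine operations it needs, and each shared edge carries a
# directed label saying whether the neighboring face must be knit
# before (IN) or after (OUT) this face.
class EdgeDir(Enum):
    IN = "in"
    OUT = "out"

@dataclass
class Face:
    ops: List[str]                         # e.g. ["knit", "xfer", "knit"]
    edge_labels: Dict[int, EdgeDir] = field(default_factory=dict)  # neighbor id -> direction

@dataclass
class AugmentedStitchMesh:
    faces: List[Face]

    def prerequisites(self, face_id: int) -> List[int]:
        """Neighbors that must be scheduled before this face."""
        return [n for n, d in self.faces[face_id].edge_labels.items()
                if d is EdgeDir.IN]
```

With such a structure, one valid knitting order can be obtained, for example, by topologically sorting faces along the directed edge labels before emitting machine instructions.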
Figure 1: Our compiler processes high-level primitives into low-level instructions for production on industrial knitting machines.
We present the first computational approach that can transform three-dimensional (3D) meshes, created by traditional modeling programs, directly into instructions for a computer-controlled knitting machine. Knitting machines are able to robustly and repeatably form knitted 3D surfaces from yarn but have many constraints on what they can fabricate. Given user-defined starting and ending points on an input mesh, our system incrementally builds a helix-free, quad-dominant mesh with uniform edge lengths, runs a tracing procedure over this mesh to generate a knitting path, and schedules the knitting instructions for this path in a way that is compatible with machine constraints. We demonstrate our approach on a wide range of 3D meshes.
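The tracing step is only named in the abstract. The toy sketch below shows just the basic idea of turning courses (rows) of stitches into one continuous yarn path by alternating direction, the way a machine carriage moves back and forth; it is a simplification for illustration, not the paper's tracing procedure, which must also handle non-trivial connectivity and machine constraints.

```python
# Toy illustration of tracing: flatten courses (rows of stitches) into a
# single back-and-forth yarn path, reversing direction on alternate
# courses as a knitting machine carriage does. Stitches are plain ids.
def trace_courses(courses):
    path = []
    for i, course in enumerate(courses):
        path.extend(course if i % 2 == 0 else reversed(course))
    return path

# Three short courses of four stitches each:
courses = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
print(trace_courses(courses))  # [0, 1, 2, 3, 7, 6, 5, 4, 8, 9, 10, 11]
```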
Figure 1: By modeling user behavior and not thresholding transitions, we create a high overall quality on-line motion generator suitable for directly controlled characters. Left: a screen capture. Middle: users control the character with a gamepad. Right: characters must respond immediately to user input, lest they run afoul of environmental hazards.

In game environments, animated character motion must rapidly adapt to changes in player input; for example, if a directional signal from the player's gamepad is not incorporated into the character's trajectory immediately, the character may blithely run off a ledge. Traditional schemes for data-driven character animation lack the split-second reactivity required for this direct control; while they can be made to work, motion artifacts will result. We describe an on-line character animation controller that assembles a motion stream from short motion fragments, choosing each fragment based on current player input and the previous fragment. By adding a simple model of player behavior we are able to improve an existing reinforcement learning method for precalculating good fragment choices. We demonstrate the efficacy of our model by comparing the animation selected by our new controller to that selected by existing methods and to the optimal selection, given knowledge of the entire path. This comparison is performed over real-world data collected from a game prototype. Finally, we provide results indicating that occasional low-quality transitions between motion segments are crucial to high-quality on-line motion generation; this is an important result for others crafting animation systems for directly controlled characters, as it argues against the common practice of transition thresholding.
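As a rough sketch of the runtime side of such a controller (the offline reinforcement-learning precomputation and the player-behavior model are omitted), fragment selection can be pictured as a table lookup keyed by the previous fragment and the current, discretized player input. All names and values below are hypothetical:

```python
import random

# Hypothetical runtime loop of a fragment-based controller: a precomputed
# policy table maps (previous fragment, discretized input) -> next fragment.
# In the paper, this table would come from reinforcement learning over
# transition quality combined with a model of player behavior; here it is
# hand-filled only to show the lookup.
class FragmentController:
    def __init__(self, policy, fragments_by_input):
        self.policy = policy                      # {(prev, input): next}
        self.fragments_by_input = fragments_by_input

    def next_fragment(self, prev, player_input):
        # Always honor the latest input, even if the best available
        # transition is low quality: reactivity over transition thresholds.
        key = (prev, player_input)
        if key in self.policy:
            return self.policy[key]
        return random.choice(self.fragments_by_input[player_input])

policy = {("run_fwd", "left"): "turn_left", ("turn_left", "left"): "run_left"}
fragments_by_input = {"left": ["turn_left"], "forward": ["run_fwd"]}
ctrl = FragmentController(policy, fragments_by_input)
print(ctrl.next_fragment("run_fwd", "left"))   # turn_left
```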
We present an image editing program which allows artists to paint in the gradient domain with real-time feedback on megapixel-sized images. Along with a pedestrian, though powerful, gradient-painting brush and gradient-clone tool, we introduce an edge brush designed for edge selection and replay. These brushes, coupled with special blending modes, allow users to accomplish global lighting and contrast adjustments using only local image manipulations, e.g., strengthening a given edge or removing a shadow boundary. Such operations would be tedious in a conventional intensity-based paint program and hard for users to get right in the gradient domain without real-time feedback. The core of our paint program is a simple-to-implement GPU multigrid method which allows integration of megapixel-sized full-color gradient fields at over 20 frames per second on modest hardware. By way of evaluation, we present example images produced with our program and characterize the iteration time and convergence rate of our integration method.
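The real-time rates come from a GPU multigrid solver. As a CPU-side illustration of the underlying numerics only (not the paper's implementation), the sketch below integrates an edited gradient field G by solving the Poisson equation laplace(u) = div(G) with a basic V-cycle; it assumes power-of-two image dimensions and zero Dirichlet boundaries, and the parameter choices are mine.

```python
import numpy as np

# Toy stand-in for a GPU multigrid integrator: solve laplace(u) = f,
# where f = div(G) is the divergence of the edited gradient field,
# using Jacobi smoothing, 2x2 block restriction, and nearest-neighbor
# prolongation. Unit grid spacing is absorbed into the stencil.
def jacobi(u, f, iters=4):
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] -
                                f[1:-1, 1:-1])
    return u

def residual(u, f):
    r = np.zeros_like(u)
    r[1:-1, 1:-1] = f[1:-1, 1:-1] - (u[:-2, 1:-1] + u[2:, 1:-1] +
                                     u[1:-1, :-2] + u[1:-1, 2:] -
                                     4.0 * u[1:-1, 1:-1])
    return r

def restrict(r):
    # Average 2x2 blocks (assumes even dimensions).
    return 0.25 * (r[0::2, 0::2] + r[1::2, 0::2] + r[0::2, 1::2] + r[1::2, 1::2])

def prolong(e, shape):
    # Nearest-neighbor upsampling back to the fine grid.
    return np.kron(e, np.ones((2, 2)))[:shape[0], :shape[1]]

def v_cycle(u, f):
    if min(u.shape) <= 4:
        return jacobi(u, f, iters=50)       # coarsest level: just smooth
    u = jacobi(u, f, iters=4)               # pre-smooth
    r = 4.0 * restrict(residual(u, f))      # restrict residual (factor 4: grid spacing doubles)
    e = v_cycle(np.zeros_like(r), r)        # solve the error equation coarsely
    u = u + prolong(e, u.shape)             # coarse-grid correction
    return jacobi(u, f, iters=4)            # post-smooth

# Usage: a few V-cycles per edit, e.g.
#   u = np.zeros((512, 512))
#   for _ in range(5): u = v_cycle(u, div_G)
# On a GPU, each of these passes maps naturally to a per-pixel kernel,
# which is where interactive rates come from.
```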