Inferring a high dynamic range (HDR) image from a single low dynamic range (LDR) input is an ill-posed problem in which we must compensate for data lost to under-/over-exposure and color quantization. To tackle this, we propose the first deep-learning-based approach for fully automatic inference using convolutional neural networks. Because a naive approach that directly infers a 32-bit HDR image from an 8-bit LDR image is intractable due to the difficulty of training, we take an indirect approach; the key idea of our method is to synthesize LDR images taken with different exposures (i.e., bracketed images) based on supervised learning, and then reconstruct an HDR image by merging them. By learning the relative changes in pixel values under increased/decreased exposures using 3D deconvolutional networks, our method can reproduce not only natural tones without introducing visible noise but also the colors of saturated pixels. We demonstrate the effectiveness of our method by comparing our results not only with those of conventional methods but also with ground-truth HDR images.
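The abstract describes a two-stage pipeline: a learned network first synthesizes a bracketed exposure stack from the single LDR input, and a conventional merge then produces the HDR radiance map. Below is a minimal sketch of the merge stage only, assuming the stack has already been inferred (the `predict_exposure_stack` name is hypothetical) and that the camera response is a simple gamma curve; the paper's actual response handling and weighting may differ.

```python
import numpy as np

GAMMA = 2.2  # assumed camera response; the actual CRF may differ

def weight(z):
    """Hat-shaped weight that trusts mid-tone pixels most (Debevec-style)."""
    return 1.0 - np.abs(2.0 * z - 1.0)

def merge_bracketed(ldr_stack, exposure_times):
    """Merge bracketed LDR images (values in [0, 1]) into an HDR radiance map."""
    num = np.zeros_like(ldr_stack[0], dtype=np.float64)
    den = np.zeros_like(ldr_stack[0], dtype=np.float64)
    for img, t in zip(ldr_stack, exposure_times):
        img = img.astype(np.float64)
        lin = img ** GAMMA            # undo the assumed gamma response
        w = weight(img)
        num += w * lin / t            # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)

# Usage (hypothetical network call producing the inferred stack):
# stack, times = predict_exposure_stack(ldr_image)   # e.g. times = [1/4, 1, 4]
# hdr = merge_bracketed(stack, times)
```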
Figure 1. (a) Mesh models. (b) Making papercraft toys with a computer. (c) Papercraft toys of the mesh models.

Abstract: We propose a new method for producing unfolded papercraft patterns of rounded toy animal figures from triangulated meshes by means of strip-based approximation. Although in principle a triangulated model can be unfolded simply by retaining as much of its connectivity as possible while checking for intersecting triangles in the unfolded plane, creating a pattern with tens of thousands of triangles is unrealistic. Our approach is to approximate the mesh model by a set of continuous triangle strips with no internal vertices. Initially, we subdivide the mesh into parts corresponding to the features of the model. We then segment each part into zonal regions, grouping triangles that lie at similar topological distances from the part boundary. We generate triangle strips by simplifying the mesh while retaining the borders of the zonal regions and additional cut lines. The pattern is then created simply by unfolding the set of strips. The distinguishing feature of our method is that we approximate a mesh model by a set of continuous strips rather than by other ruled surfaces such as parts of cones or cylinders. Thus, the approximated unfolded pattern can be generated using only mesh operations and a simple unfolding algorithm. Furthermore, a set of strips can be crafted just by bending the paper (without breaking edges) and can represent smooth features of the original mesh models.
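As an illustration of the zonal-region step described above, the following sketch groups triangles by their breadth-first (topological) distance from the part boundary over the face-adjacency graph. The mesh representation (`adjacency`, `boundary_tris`) and the `band_width` parameter are assumptions for illustration; strip generation and unfolding are not shown.

```python
from collections import deque

def zonal_regions(adjacency, boundary_tris, band_width=1):
    """adjacency: dict mapping triangle id -> list of neighboring triangle ids.
    boundary_tris: triangles touching the part boundary (distance 0).
    Returns a dict mapping triangle id -> zone index."""
    dist = {t: 0 for t in boundary_tris}
    queue = deque(boundary_tris)
    while queue:
        t = queue.popleft()
        for n in adjacency[t]:
            if n not in dist:
                dist[n] = dist[t] + 1
                queue.append(n)
    # Triangles at similar topological distances fall into the same zone.
    return {t: d // band_width for t, d in dist.items()}
```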
Figure 1: Left: Arbitrary 3D model of an IKEA ALVE cabinet downloaded from Google 3D Warehouse. Middle: Fabricatable parts and connectors generated by our algorithm. Right: We built a real cabinet based on the structure and dimensions of the generated parts/connectors.

Abstract: Although there is an abundance of 3D models available, most of them exist only in virtual simulation and are not immediately usable as physical objects in the real world. We solve the problem of taking as input a 3D model of a man-made object and automatically generating the parts and connectors needed to build the corresponding physical object. We focus on furniture models and define formal grammars for IKEA cabinets and tables. We perform lexical analysis to identify the primitive parts of the 3D model. Structural analysis then assigns structural information to these parts and generates the connectors (e.g., nails and screws) needed to attach the parts together. We demonstrate our approach with arbitrary 3D models of cabinets and tables available online.
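To make the lexical-analysis idea concrete, here is a hedged sketch that classifies axis-aligned boxes extracted from the model into primitive-part tokens that a cabinet grammar could then consume. The token names, the `thin` threshold, and the assumption that parts are axis-aligned boxes are illustrative, not the paper's actual grammar.

```python
from dataclasses import dataclass

@dataclass
class Box:
    w: float  # extent along x
    h: float  # extent along y (up)
    d: float  # extent along z

def classify_part(box: Box, thin: float = 0.05) -> str:
    """Map a box to a primitive-part token based on which dimension is thin."""
    if min(box.w, box.h, box.d) > thin:
        return "BLOCK"         # not panel-like; left to structural analysis
    if box.h <= thin:
        return "SHELF"         # thin horizontal panel
    if box.w <= thin:
        return "SIDE_PANEL"    # thin vertical panel facing sideways
    return "BACK_PANEL"        # thin vertical panel facing front/back

# A structural pass could then match sequences of these tokens against the
# furniture grammar and emit connectors (nails, screws) where parts meet.
```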
We propose 2D stick figures as a unified medium for visualizing and searching for human motion data. Stick figures can express a wide range of human motion, and they are easy to draw without any professional training. In our interface, the user can browse the overall motion by viewing the stick-figure images generated from the database and retrieve motions directly by using sketched stick figures as an input query. We started with a preliminary survey to observe how people draw stick figures. Based on the rules observed in this user study, we developed an algorithm that converts motion data into a sequence of stick figures. A feature-based comparison between stick figures provides an interactive and progressive search, assisting the user's sketching by showing the current retrieval result at each stroke. We demonstrate the utility of the system with a user study in which the participants retrieved example motion segments from a database of 102 motion files using our interface.
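As a sketch of the feature-based comparison mentioned above, the snippet below represents each stick figure by the normalized directions of its limb segments and scores a query against a candidate by mean cosine distance. The segment representation and the choice of descriptor are assumptions for illustration; the paper's actual features may differ.

```python
import numpy as np

def limb_directions(segments):
    """segments: list of ((x0, y0), (x1, y1)) limb endpoints in drawing order."""
    dirs = []
    for (x0, y0), (x1, y1) in segments:
        v = np.array([x1 - x0, y1 - y0], dtype=float)
        n = np.linalg.norm(v)
        dirs.append(v / n if n > 0 else v)
    return np.stack(dirs)

def stick_figure_distance(query_segments, candidate_segments):
    """Lower is more similar; limbs are compared in corresponding order."""
    q = limb_directions(query_segments)
    c = limb_directions(candidate_segments)
    return float(np.mean(1.0 - np.sum(q * c, axis=1)))  # mean cosine distance
```

Ranking database poses by this distance at each stroke would support the interactive, progressive search described above.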