In this paper, for the first time, we investigate professional sketch to 3D shape generation. Collecting professional sketches is a considerably slower and more costly process (each sketch takes longer to draw, and the reward paid per sketch is higher).

Even with the dataset, the problem of professional sketch to 3D shape generation is non-trivial. There are two unique difficulties brought by professional sketches that need to be addressed. First, sketches naturally exhibit figure/ground ambiguity, i.e., the same sketch can lead to different 3D shape interpretations (Fig. 2(a) offers examples). This ambiguity arises largely because the foreground is hard to distinguish from only a few lines in the absence of colour and texture. Second, although the sketches are professionally produced, misalignment still exists between a sketch and its 3D shape as a result of the artists' differing drawing skills and styles.

As the second contribution, we design a deep adversarial network to specifically tackle these challenges, where (i) a discrimination-attention mechanism is developed to help figure/ground estimation; we re-purpose the self-attention mechanism of [15] and introduce a novel attention loss to ensure that an automatically generated attention map aligns with that of the ground-truth 3D shape, and (ii) to tackle the inherent sketch-3D shape misalignment, we learn a global non-linear geometric transformation between an input sketch and its 3D shape counterpart via a spatial transformer network. Our method builds on a recent work [4], in which an adversarial learning strategy is adopted to train a conditional GAN that generates 2D representations of surface normal and depth maps describing a 3D shape from different viewpoints, which can then be fused into a full 3D representation.

Our contributions are summarized as follows:
• We contribute the first professional sketch and 3D shape dataset, which contains 500 3D shapes and 1,500 professional sketches, to drive future research.
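The attention-alignment idea described above can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical example: the names (`AttentionHead`, `attention_loss`) and the use of an L1 penalty against a foreground mask rendered from the ground-truth 3D shape are assumptions for illustration, not the authors' implementation. It only shows the general pattern of a re-purposed self-attention module whose output map is supervised by a map derived from the ground-truth shape.

```python
# Minimal sketch (PyTorch), assuming hypothetical names and an L1 attention penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionHead(nn.Module):
    """Produces a single-channel attention map from generator features
    via a simple self-attention block."""
    def __init__(self, in_channels):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // 8, 1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, 1)
        self.to_map = nn.Conv2d(in_channels, 1, 1)

    def forward(self, feats):
        b, c, h, w = feats.shape
        q = self.query(feats).flatten(2)              # (B, C/8, HW)
        k = self.key(feats).flatten(2)                # (B, C/8, HW)
        affinity = torch.bmm(q.transpose(1, 2), k)    # (B, HW, HW)
        weights = F.softmax(affinity, dim=-1)
        attended = torch.bmm(feats.flatten(2), weights.transpose(1, 2))
        attended = attended.view(b, c, h, w)
        return torch.sigmoid(self.to_map(attended))   # (B, 1, H, W) attention map

def attention_loss(pred_map, gt_map):
    """Penalise disagreement between the predicted attention map and a map
    derived from the ground-truth 3D shape (e.g. its rendered foreground mask)."""
    return F.l1_loss(pred_map, gt_map)
```

In a full pipeline this term would be added to the conditional GAN's adversarial and reconstruction losses, so that the generator's attention is steered toward the true figure/ground separation.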
In this paper we emphasize the importance of unique certified one-time key pairs in Buyer-Seller Watermarking (BSW) protocols. We distinguish between reactive unbinding attacks, in which the seller reacts to illicit file sharing by fabricating further evidence of such activity, and pre-emptive unbinding attacks, in which the seller gains an advantage by taking action that pre-empts the file being shared. We demonstrate the importance of certified one-time key pairs in the BSW protocol by Lei et al., for protecting against pre-emptive unbinding attacks, and subsequently reveal a new attack on a recently published BSW protocol due to its omission of unique key pairs.
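To make the role of certified one-time key pairs concrete, the following Python sketch (using the `cryptography` package) illustrates the property being argued for: each transaction uses a fresh buyer key pair, certified by a certification authority for that single transaction, and the seller verifies the binding before proceeding. The class and function names are hypothetical and this is not the Lei et al. protocol itself, only the key-handling discipline whose omission enables the new attack.

```python
# Illustrative sketch only: one fresh, certified key pair per transaction.
from dataclasses import dataclass
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

@dataclass
class OneTimeCertificate:
    buyer_id: str
    transaction_id: str
    one_time_public_key: bytes   # DER-encoded public key
    ca_signature: bytes

def issue_certificate(ca_private_key, buyer_id, transaction_id, one_time_public_key):
    """CA binds the buyer's fresh public key to this single transaction."""
    payload = buyer_id.encode() + transaction_id.encode() + one_time_public_key
    sig = ca_private_key.sign(payload, ec.ECDSA(hashes.SHA256()))
    return OneTimeCertificate(buyer_id, transaction_id, one_time_public_key, sig)

def seller_accepts(ca_public_key, cert, expected_transaction_id):
    """Seller checks the certificate before embedding the buyer's watermark.
    Rejecting reused or wrongly-bound keys blocks pre-emptive unbinding attacks."""
    if cert.transaction_id != expected_transaction_id:
        return False
    payload = (cert.buyer_id.encode() + cert.transaction_id.encode()
               + cert.one_time_public_key)
    try:
        ca_public_key.verify(cert.ca_signature, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

# Buyer side: a brand-new key pair for every purchase, never reused.
buyer_key = ec.generate_private_key(ec.SECP256R1())
pub_der = buyer_key.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
```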
Product designers extensively use sketches to create and communicate 3D shapes and thus form an ideal audience for sketch-based modeling, non-photorealistic rendering and sketch filtering. However, sketching requires significant expertise and time, making design sketches a scarce resource for the research community. We introduce OpenSketch, a dataset of product design sketches aimed at offering a rich source of information for a variety of computer-aided design tasks. OpenSketch contains more than 400 sketches representing 12 man-made objects drawn by 7 to 15 product designers of varying expertise. We provided participants with front, side and top views of these objects, and instructed them to draw from two novel perspective viewpoints. This drawing task forces designers to construct the shape from their mental vision rather than directly copy what they see. They achieve this task by employing a variety of sketching techniques and methods not observed in prior datasets. Together with industrial design teachers, we distilled a taxonomy of line types and used it to label each stroke of the 214 sketches drawn from one of the two viewpoints. While some of these lines have long been known in computer graphics, others remain to be reproduced algorithmically or exploited for shape inference. In addition, we also asked participants to produce clean presentation drawings from each of their sketches, resulting in aligned pairs of drawings of different styles. Finally, we registered each sketch to its reference 3D model by annotating sparse correspondences. We provide an analysis of our annotated sketches, which reveals systematic drawing strategies over time and shapes, as well as a positive correlation between presence of construction lines and accuracy. Our sketches, in combination with provided annotations, form challenging benchmarks for existing algorithms as well as a great source of inspiration for future developments. We illustrate the versatility of our data by using it to test a 3D reconstruction deep network trained on synthetic drawings, as well as to train a filtering network to convert concept sketches into presentation drawings. We distribute our dataset under the Creative Commons CC0 license: https://ns.inria.fr/d3/OpenSketch.