We propose a new blind watermarking algorithm for three-dimensional (3D) printed objects that has applications in metadata embedding, robotic grasping, counterfeit prevention, and crime investigation. Our method can be used on fused deposition modeling (FDM) 3D printers and works by modifying the printed layer thickness on small patches of the surface of an object. These patches can be applied to multiple regions of the object, thereby making the watermark resistant to various attacks such as cropping, local deformation, local surface degradation, or printing errors. The novelties of our method are the use of the thickness of printed layers as a one-dimensional carrier signal to embed data, the minimization of distortion by modifying the layers only locally, and one-shot detection using a common paper scanner. To correct encoding or decoding errors, our method combines multiple patches and uses a two-dimensional (2D) parity check to estimate the error probability of each bit, obtaining a higher correction rate than a naive majority vote. The parity bits included in the patches serve a double purpose: in addition to error detection, they are used to identify the orientation of the patches. In our experiments, we successfully embedded a watermark into flat surfaces of 3D objects with various filament colors using a standard FDM 3D printer, extracted it using a common 2D paper scanner, and evaluated the sensitivity to surface degradation and signal amplitude.
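The abstract does not spell out how the parity information and the multiple patch copies are combined, so the following is only a minimal sketch of the general idea: several noisy copies of a payload grid that carries row/column parity bits are merged with a soft vote, where each copy's vote is weighted by how many of its parity checks pass (a crude stand-in for the per-bit error probability the paper estimates). All function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def parity_weight(grid):
    """Fraction of row/column parity checks satisfied.

    `grid` is a 2D 0/1 array whose last row and last column hold parity bits.
    """
    data = grid[:-1, :-1]
    row_ok = (data.sum(axis=1) % 2 == grid[:-1, -1]).mean()
    col_ok = (data.sum(axis=0) % 2 == grid[-1, :-1]).mean()
    return (row_ok + col_ok) / 2.0

def combine_patches(grids):
    """Weighted soft vote over decoded patch copies; returns the payload bits."""
    weights = np.array([parity_weight(g) for g in grids])
    stacked = np.stack([g[:-1, :-1] for g in grids]).astype(float)
    soft = np.tensordot(weights, stacked, axes=1) / weights.sum()
    return (soft >= 0.5).astype(int)

# Toy example: three copies of the same 4x4 payload with parity bits appended,
# one of which contains a read error.
rng = np.random.default_rng(0)
payload = rng.integers(0, 2, size=(4, 4))

def with_parity(p):
    g = np.zeros((5, 5), dtype=int)
    g[:-1, :-1] = p
    g[:-1, -1] = p.sum(axis=1) % 2
    g[-1, :-1] = p.sum(axis=0) % 2
    return g

copies = [with_parity(payload) for _ in range(3)]
copies[1][0, 0] ^= 1                      # simulate a bit error in one copy
print((combine_patches(copies) == payload).all())
```

A plain majority vote treats every copy equally; weighting by parity consistency lets a badly degraded patch contribute less, which is the intuition behind the higher correction rate claimed in the abstract.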
This paper describes a method for recovering the appearance of inner slices of translucent objects. The outer appearance of a translucent object is the summation of the appearances of slices at all depths, where each slice is blurred by a depth-dependent point spread function (PSF). By exploiting the differences in the low-pass characteristics of these depth-dependent PSFs, we develop a multi-frequency illumination method for obtaining the appearance of individual inner slices using a coaxial projector-camera setup. Specifically, by measuring the target object while varying the spatial frequency of checker patterns emitted from a projector, our method recovers the inner slices via a simple linear solution. We quantitatively evaluate the accuracy of the proposed method in simulations and show qualitative recovery results on real-world scenes.
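As a rough illustration of the "simple linear solution" mentioned above (not the authors' code): if each per-pixel measurement under a checker pattern of spatial frequency f is modeled as m_f = sum_d A[f, d] * s_d, where A[f, d] is the attenuation of frequency f by the depth-d PSF and s_d is the unknown slice appearance, then the slices follow from a least-squares solve once A is known, e.g. from calibration. The attenuation values and image sizes below are made up.

```python
import numpy as np

def recover_slices(measurements, A):
    """measurements: (F, H, W) images, A: (F, D) attenuation matrix.
    Returns (D, H, W) recovered slice appearances."""
    F, H, W = measurements.shape
    M = measurements.reshape(F, -1)            # one column per pixel
    S, *_ = np.linalg.lstsq(A, M, rcond=None)  # solve A @ S = M
    return S.reshape(A.shape[1], H, W)

# Toy example with synthetic attenuations and slices.
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.9, 0.7],    # low frequency reaches deep slices
              [1.0, 0.5, 0.2],
              [1.0, 0.2, 0.05]])  # high frequency mostly sees the top slice
slices = rng.random((3, 8, 8))
meas = np.tensordot(A, slices, axes=1)
print(np.allclose(recover_slices(meas, A), slices))
```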
This paper presents a material classification method using an off-the-shelf Time-of-Flight (ToF) camera. The proposed method is built upon the key observation that the depth measurement by a ToF camera is distorted for objects made of certain materials, especially translucent ones. We show that this distortion is due to the variation of time-domain impulse responses across materials and to the measurement mechanism of ToF cameras. Specifically, we reveal that the amount of distortion varies according to the modulation frequency of the ToF camera, the object material, and the distance between the camera and the object. Our method uses this depth distortion as a feature and classifies the materials in a scene. The effectiveness of the proposed method is demonstrated by numerical evaluations and real-world experiments, showing that it can classify materials even for visually indistinguishable objects.
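A hedged sketch of the classification step, under the assumption (consistent with the abstract) that the per-frequency depth distortion is used directly as a feature vector for an off-the-shelf classifier. The modulation frequencies, material labels, and distortion values below are invented for illustration, and the SVM is just one possible choice of classifier.

```python
import numpy as np
from sklearn.svm import SVC

def distortion_feature(measured_depths, true_depth):
    """Per-frequency depth error (meters), used as the classification feature."""
    return np.asarray(measured_depths) - true_depth

# Toy training data: distortions at 3 modulation frequencies, with material labels.
X = np.array([
    distortion_feature([0.002, 0.001, 0.000], 0.0),   # opaque plastic
    distortion_feature([0.030, 0.022, 0.015], 0.0),   # wax (translucent)
    distortion_feature([0.003, 0.002, 0.001], 0.0),
    distortion_feature([0.028, 0.020, 0.013], 0.0),
])
y = np.array(["plastic", "wax", "plastic", "wax"])

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([distortion_feature([0.029, 0.021, 0.014], 0.0)]))  # -> wax
```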
This paper presents a method for recovering the shape and surface normals of a transparent object from a single viewpoint using a Time-of-Flight (ToF) camera. Our method is built upon the fact that the speed of light varies with the refractive index of the medium, and therefore the depth measurement of a transparent object by a ToF camera may be distorted. We show that, from this ToF distortion, the refractive light path can be uniquely determined by estimating a single parameter. We estimate this parameter by enforcing consistency between the surface normal determined by a light-path candidate and the normal computed from the corresponding shape. The proposed method is evaluated in both simulations and real-world experiments and achieves faithful shape recovery of transparent objects.
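The abstract implies a one-dimensional search over the unknown parameter, scored by how well the two normal estimates agree. The following is only a schematic sketch of that search loop; the two helper functions are placeholders (a real implementation would trace refracted rays and differentiate the reconstructed surface), and the toy model is chosen so the normals agree at a single parameter value.

```python
import numpy as np

def normals_from_light_path(param):
    # Placeholder: normal field implied by the light-path candidate `param`.
    return np.stack([np.zeros((8, 8)), np.zeros((8, 8)), np.ones((8, 8))], -1)

def normals_from_shape(param):
    # Placeholder: normal field differentiated from the recovered shape.
    tilt = 0.1 * (param - 1.5)                 # toy model: agrees at param = 1.5
    n = np.stack([np.full((8, 8), tilt), np.zeros((8, 8)), np.ones((8, 8))], -1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def consistency_cost(param):
    a, b = normals_from_light_path(param), normals_from_shape(param)
    return np.mean(1.0 - np.sum(a * b, axis=-1))   # mean (1 - cos angle)

candidates = np.linspace(1.0, 2.0, 101)
best = candidates[np.argmin([consistency_cost(p) for p in candidates])]
print(best)   # -> 1.5 in this toy model
```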