Combining scanning electron microscopy with serial slicing by a focused ion beam yields spatial image data of materials structures at the nanometer scale. However, the depth of field of the scanning electron microscopic images causes unwanted effects when highly porous structures are imaged. Proper spatial reconstruction of such porous structures from the stack of microscopic images is a difficult and, in general, still unsolved segmentation problem. Recently, machine learning methods have proven to solve a variety of image segmentation problems. However, their use is hindered by the need for large amounts of annotated data in the training phase. Here, we therefore replace annotated real image data by simulated image stacks of synthetic structures: realizations of stochastic germ-grain models and random packings. This strategy yields the annotations for free, but shifts the effort to choosing appropriate stochastic geometry models and generating sufficiently realistic scanning electron microscopy images.
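To illustrate the kind of synthetic training structure meant here, the following is a minimal sketch of a realization of a Boolean model, one of the most basic germ-grain models: germs are drawn from a homogeneous Poisson process and grains are balls of random radius. The volume size, germ intensity, and radius range are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def boolean_model(shape=(64, 64, 64), intensity=1e-4, r_min=3, r_max=8):
    """Rasterize a Boolean model with spherical grains into a binary volume."""
    volume = np.zeros(shape, dtype=bool)
    n_germs = rng.poisson(intensity * np.prod(shape))  # Poisson number of germs
    centers = rng.uniform(0, 1, size=(n_germs, 3)) * np.array(shape)
    radii = rng.uniform(r_min, r_max, size=n_germs)
    zz, yy, xx = np.indices(shape)
    for (cz, cy, cx), r in zip(centers, radii):
        # union of balls: mark all voxels inside the current grain
        volume |= (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    return volume

solid = boolean_model()
print("solid volume fraction:", solid.mean())
```

A binary volume like this, together with a synthetic imaging step, provides the perfectly annotated training data referred to above.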
The formation index of filter paper is one of the most important characteristics used in industrial quality control. Its estimation is often based on subjective comparison-chart rating or, more objectively, on the power spectrum of the paper structure observed on a transmission light table. It is shown that paper formation can be modeled by means of Gaussian random fields with a well-defined class of correlation functions, and a formation index is derived from the density of the Bartlett spectrum estimated from image data: the mean of the Bessel transform of the correlation function taken for wavelengths between 2 and 5 mm. Furthermore, it is shown that considerable variation of the local grammage can be observed even when the fibers are uniformly and independently scattered in the paper sheet.
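The following is a minimal sketch of this pipeline, assuming a grayscale transmission image `img` with a known pixel size in mm: the Bartlett (power) spectrum is estimated via the periodogram and averaged over the frequency band corresponding to spatial wavelengths between 2 and 5 mm. The exact normalization and spectral estimator of the paper may differ; this illustrates the structure of the computation only.

```python
import numpy as np

def formation_index(img, pixel_size_mm, band_mm=(2.0, 5.0)):
    img = img - img.mean()                            # remove mean grammage
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    spectrum /= img.size                              # periodogram normalization
    ny, nx = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size_mm))
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size_mm))
    freq = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial frequency in 1/mm
    lo, hi = 1.0 / band_mm[1], 1.0 / band_mm[0]       # wavelengths 2-5 mm
    mask = (freq >= lo) & (freq <= hi)
    return spectrum[mask].mean()

rng = np.random.default_rng(1)
print(formation_index(rng.normal(size=(256, 256)), pixel_size_mm=0.1))
```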
A new method is presented for estimating the specific fiber length from 3D images of macroscopically homogeneous fiber systems. The method is based on a discrete version of the Crofton formula, where local knowledge from 3 × 3 × 3-pixel configurations of the image data is exploited. It is shown that the relative error resulting from the discretization of the outer integral of the Crofton formula amounts to at most 1.2%. An algorithmic implementation of the method is simple, and both the runtime and the memory requirements are low. The estimation is significantly improved by considering 3 × 3 × 3-pixel configurations instead of the 2 × 2 × 2-pixel configurations already studied in the literature.
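A structural sketch of this configuration-count scheme is given below; it is not the paper's actual estimator. Each 3 × 3 × 3 neighborhood of the binary image is encoded as a 27-bit code, occurrences of each code are counted, and the specific fiber length is a weighted sum of these counts per unit volume. The weight table is a placeholder here; the real weights derive from the discretized Crofton formula in the paper.

```python
import numpy as np

def config_codes(volume):
    """Encode each 3x3x3 neighborhood of a binary volume as a 27-bit integer."""
    vol = volume.astype(np.int64)
    shape = tuple(s - 2 for s in vol.shape)
    codes = np.zeros(shape, dtype=np.int64)
    bit = 0
    for dz in range(3):
        for dy in range(3):
            for dx in range(3):
                codes += vol[dz:dz + shape[0],
                             dy:dy + shape[1],
                             dx:dx + shape[2]] << bit
                bit += 1
    return codes

def specific_fiber_length(volume, weights):
    """weights: dict mapping configuration code -> Crofton weight (placeholder)."""
    codes, counts = np.unique(config_codes(volume), return_counts=True)
    total = sum(weights.get(int(c), 0.0) * int(n) for c, n in zip(codes, counts))
    return total / volume.size   # length density in voxel units
```

The counting step needs only one pass over the image and a sparse histogram of observed codes, which is consistent with the low runtime and memory requirements claimed above.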
In image processing, the amount of data to be processed grows rapidly, in particular when dealing with images of more than two dimensions or time series of images. Thus, efficient processing is a challenge, as data sizes may push even supercomputers to their limits. Quantum image processing promises to encode images with logarithmically fewer qubits than classical pixels in the image. In theory, this is a huge advance, but so far few experiments have been conducted in practice, in particular on real backends. Often, the precise conversion of classical data to quantum states, the exact implementation, and the interpretation of the measurements in the classical context are challenging. We investigate these practical questions in this paper. In particular, we study the feasibility of the flexible representation of quantum images (FRQI). Furthermore, we check experimentally up to which image size an image can be encoded in the current noisy intermediate-scale quantum era, both on simulators and on real backends. Finally, we propose a method for simplifying the circuits needed for the FRQI. With our alteration, the number of gates can be reduced, in particular that of the error-prone controlled-NOT gates. As a consequence, the size of manageable images increases.
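For context, the following is a minimal sketch of a textbook FRQI encoding circuit in Qiskit for a 2 × 2 image, not the simplified circuits proposed in the paper. Gray values in [0, 1] are mapped to angles theta in [0, pi/2], and the encoded state is (1/2) Σ_i (cos θ_i |0⟩ + sin θ_i |1⟩)|i⟩: a Hadamard layer creates the superposition of positions, and one multi-controlled RY rotation per pixel writes the gray value into the color qubit.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import RYGate

def frqi_circuit(gray_values):
    """gray_values: flat array of pixel intensities in [0, 1], length a power of 2."""
    n_pos = int(np.log2(len(gray_values)))      # position qubits
    qc = QuantumCircuit(n_pos + 1)              # last qubit stores the color
    qc.h(range(n_pos))                          # uniform superposition of positions
    for i, g in enumerate(gray_values):
        theta = (np.pi / 2) * g
        # flip the zero bits so the multi-controlled gate fires exactly on |i>
        zeros = [q for q in range(n_pos) if not (i >> q) & 1]
        for q in zeros:
            qc.x(q)
        # RY(2*theta) maps |0> to cos(theta)|0> + sin(theta)|1>
        qc.append(RYGate(2 * theta).control(n_pos), list(range(n_pos)) + [n_pos])
        for q in zeros:
            qc.x(q)
    return qc

qc = frqi_circuit([0.0, 0.25, 0.5, 1.0])        # a 2x2 image, flattened
print(qc)
```

Decomposing the multi-controlled rotations is exactly what produces the many controlled-NOT gates that the simplification addresses.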
Edges are image locations where the gray-value intensity changes abruptly. They are among the most important features for understanding and segmenting an image. Edge detection is a standard task in digital image processing, solved, for example, using filtering techniques. However, the amount of data to be processed grows rapidly and pushes even supercomputers to their limits. Quantum computing promises exponentially lower memory usage in terms of the number of qubits compared to the number of classical bits. In this paper, we propose a hybrid method for quantum edge detection based on the idea of a quantum artificial neuron. Our method can be practically implemented on quantum computers, especially on those of the current noisy intermediate-scale quantum era. We compare six variants of the method that reduce the number of circuits and thus the time required for quantum edge detection. Taking advantage of the scalability of our method, we can detect edges in images considerably larger than previously feasible.
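As a point of reference for the classical filtering techniques mentioned above (not the quantum neuron method of the paper), the following is a minimal Sobel edge detector in plain numpy: two gradient filters, a gradient magnitude, and a threshold. The threshold value is an illustrative assumption.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2d(img, kernel):
    """Valid-mode 2D cross-correlation for small kernels."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * img[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    return out

def sobel_edges(img, threshold=0.5):
    gx = filter2d(img, SOBEL_X)       # horizontal gradient
    gy = filter2d(img, SOBEL_Y)       # vertical gradient
    magnitude = np.hypot(gx, gy)      # gradient magnitude
    return magnitude > threshold * magnitude.max()   # binary edge map
```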