This paper addresses the challenge of developing parallel industrial high-performance inspection systems, contrasting a conventional parallelization approach with an auto-parallelizing technique. To this end, we introduce the functional array processing language Single Assignment C (SaC), which relies on a hardware virtualization concept to automatically generate parallel machine code for multicore CPUs and GPUs. In addition, software engineering aspects such as programmability, productivity, understandability, maintainability, and the resulting performance gains are discussed from a developer's point of view. Using several illustrative benchmarking examples from the fields of image processing and machine learning, the relationship between runtime performance and development efficiency is analyzed.
Source code comments contain key information about the underlying software system. Many redocumentation approaches, however, cannot exploit this valuable source of information. This is mainly because not all comments have the same goals and target audience and can therefore only be used selectively for redocumentation. Performing the required classification manually, for example in the form of heuristics, is usually time-consuming and error-prone and depends strongly on the programming languages and guidelines of concrete software systems. By leveraging machine learning (ML), it should be possible to classify comments, and thus transfer valuable information from the source code into documentation, with less effort but the same quality. We applied classical ML techniques as well as deep learning (DL) approaches to legacy systems by transferring source code comments into meaningful representations using, for example, word embeddings, but also novel approaches using quick response codes or a special character-to-image encoding. The results were compared with an industry-strength heuristic classification. We found that ML outperforms the heuristics both in error rate and in required effort: we achieve an accuracy of more than 95% with an image-based DL network and even over 96% with a traditional approach using a random forest classifier.
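To illustrate the kind of text classification described above, the following is a minimal sketch that trains a multinomial naive Bayes model over bag-of-words comment features. Note the substitution: the abstract's traditional approach uses a random forest over richer representations; naive Bayes stands in here only so the sketch stays self-contained, and the two comment categories ("doc" vs. "dead") and all training comments are hypothetical.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(comment):
    """Lowercase word tokens from a source code comment."""
    return re.findall(r"[a-z]+", comment.lower())

class NaiveBayesCommentClassifier:
    """Multinomial naive Bayes over bag-of-words comment features."""

    def fit(self, comments, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for comment, label in zip(comments, labels):
            tokens = tokenize(comment)
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, comment):
        tokens = tokenize(comment)
        total = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label, count in self.class_counts.items():
            # log prior + log likelihood with Laplace smoothing
            score = math.log(count / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in tokens:
                score += math.log((self.word_counts[label][tok] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical two-class training set: API documentation vs. dead code
train = [
    ("Returns the number of rows in the table", "doc"),
    ("Computes the checksum of the given buffer", "doc"),
    ("old implementation, kept for reference", "dead"),
    ("disabled until bug 123 is fixed", "dead"),
]
clf = NaiveBayesCommentClassifier().fit(*zip(*train))
print(clf.predict("Returns the checksum of the buffer"))  # prints "doc"
```

Once comments are labeled this way, only those in documentation-oriented classes would be carried over into the generated redocumentation.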
Time-of-flight (TOF) full-field range cameras use a correlative imaging technique to generate three-dimensional measurements of the environment. Though reliable and cheap, they suffer from high measurement noise and errors that limit their practical use in industrial applications. We show how some of these limitations can be overcome with standard image processing techniques specially adapted to TOF camera data. Additional information in the multimodal images recorded in this setting, not available in standard image processing settings, can be used to improve the reduction of measurement noise. Three extensions of standard techniques (wavelet thresholding, adaptive smoothing on a clustering-based image segmentation, and an extended anisotropic diffusion filter) make use of this information and are compared on synthetic data and on data acquired from two different off-the-shelf TOF cameras. Of these methods, the adapted anisotropic diffusion technique gives the best results and can be implemented to run in real time on current graphics processing unit (GPU) hardware. Like traditional anisotropic diffusion, it requires some parameter adaptation to the scene characteristics, but compared to traditional TOF image denoising it allows for low visualization delay and improved visualization of moving objects by avoiding long averaging periods.
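The extended filter in the abstract additionally exploits the multimodal TOF data; as background, the following is a minimal sketch of the classic edge-stopping diffusion it extends (Perona-Malik style), on a plain 2D grid. The parameter values (`kappa`, `lam`, `n_iter`) and the synthetic step-edge image are illustrative assumptions, not the paper's settings.

```python
import math

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.2):
    """Classic Perona-Malik diffusion on a 2D list of floats.

    The conductance g(grad) = exp(-(grad/kappa)^2) damps diffusion
    across strong edges while smoothing homogeneous (noisy) regions.
    """
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]  # work on a copy
    for _ in range(n_iter):
        out = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                total = 0.0
                # 4-neighbourhood fluxes with replicated borders
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    grad = img[ny][nx] - img[y][x]
                    total += math.exp(-(grad / kappa) ** 2) * grad
                out[y][x] = img[y][x] + lam * total
        img = out
    return img

# Synthetic noisy step edge: left half ~0, right half ~100,
# with a +/-5 checkerboard noise pattern on top
noisy = [[(100.0 if x >= 4 else 0.0) + ((-1) ** (x + y)) * 5.0
          for x in range(8)] for y in range(8)]
smooth = anisotropic_diffusion(noisy)
```

After a few iterations the checkerboard noise is largely flattened, while the 0-to-100 depth edge (whose gradient far exceeds `kappa`) is preserved, which is the behavior that makes the approach attractive for TOF depth maps.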
This paper proposes a novel approach to determine the texture periodicity, the texture element size, and further characteristics, such as the area of the basin of attraction, when computing the similarity of a test image patch with a reference. The presented method utilizes the properties of a novel metric, the so-called discrepancy norm. Due to its Lipschitz and monotonicity properties, the discrepancy norm distinguishes itself from other metrics by well-formed and stable convergence regions. Both the periodicity and the convergence regions are closely related and have an immediate impact on the performance of a subsequent template matching and evaluation step. The general form of the proposed approach relies on the generation of discrepancy-norm-induced similarity maps at random positions in the image. By applying standard image processing operations such as watershed and blob analysis to the similarity maps, a robust estimate of the characteristic periodicity can be computed. From the general approach, a tailored version for orthogonally aligned textures is derived, which is robust to noise-disturbed images and is suitable for estimation on near-regular textures. In an experimental set-up, the estimation performance is tested on samples from standardized image databases and compared with state-of-the-art methods. Results show that the proposed method is applicable to a wide range of nearly regular textures and that its estimates keep up with current methods. When adding a hypothesis generation/selection mechanism, it even outperforms the current state of the art.
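For a finite 1-D signal, the discrepancy norm is the largest absolute sum over any index interval, and it can be computed in linear time from prefix sums (maximum prefix minus minimum prefix, with the empty prefix included). The following is a 1-D sketch of this computation and of the monotonicity behavior the abstract attributes to the norm; the paper itself operates on 2-D image patches, and the example signals are illustrative.

```python
def discrepancy_norm(signal):
    """Discrepancy norm: max over all index intervals [m, n] of
    |sum(signal[m:n+1])|, computed in O(N) via prefix sums as
    max(prefix) - min(prefix), with the empty prefix S_0 = 0."""
    s, lo, hi = 0.0, 0.0, 0.0
    for v in signal:
        s += v
        lo = min(lo, s)
        hi = max(hi, s)
    return hi - lo

def discrepancy_distance(patch, reference):
    """Dissimilarity of two equal-length signals under the norm."""
    return discrepancy_norm([p - r for p, r in zip(patch, reference)])

# Sign changes matter: alternating values cancel inside intervals,
# while same-sign values of equal magnitude accumulate.
print(discrepancy_norm([1, -1, 1, -1]))  # 1.0
print(discrepancy_norm([1, 1, 1, 1]))    # 4.0

# The distance grows monotonically with the misalignment of a block
# pattern, which produces the well-formed convergence regions that a
# template matching step can exploit.
ref = [1, 1, 1, 0, 0, 0]
dists = [discrepancy_distance([0] * k + ref[:len(ref) - k], ref)
         for k in range(4)]
print(dists)  # [0.0, 1.0, 2.0, 3.0]
```

In the paper's setting, evaluating such distances around random positions yields the similarity maps on which watershed and blob analysis then estimate the texture periodicity.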