Image interpolation techniques are often required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and processing such as compression or resampling. Since the ideal interpolation function is spatially unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sinc; 2) nearest neighbor; 3) linear; 4) quadratic; 5) cubic B-spline; 6) cubic; 7) Lagrange; and 8) Gaussian interpolation and approximation techniques with kernel sizes from 1 x 1 up to 8 x 8. The comparison is done by: 1) spatial and Fourier analyses; 2) computational complexity as well as runtime evaluations; and 3) qualitative and quantitative interpolation error determinations for particular interpolation tasks taken from common situations in medical image processing. For local and Fourier analyses, a standardized notation is introduced and fundamental properties of interpolators are derived. Successful methods should be direct current (DC)-constant and interpolators rather than DC-inconstant or approximators. Each method's parameters are tuned with respect to those properties. This results in three novel kernels, which are introduced in this paper and shown to be among the best choices for medical image interpolation: the 6 x 6 Blackman-Harris windowed sinc interpolator, and the C2-continuous cubic kernels with N = 6 and N = 8 supporting points. For quantitative error evaluations, a set of 50 direct digital X-rays, selected arbitrarily from clinical routine, was used. In general, large kernel sizes were found to be superior to small interpolation masks. Except for truncated sinc interpolators, all kernels with N = 6 or larger sizes perform significantly better than N = 2 or N = 3 point methods (p << 0.005). However, the differences within the group of large-sized kernels were not significant. Summarizing the results, the cubic 6 x 6 interpolator with continuous second derivatives, as defined in (24), can be recommended for most common interpolation tasks. It appears to be the fastest six-point kernel to implement computationally. It provides excellent local and Fourier properties, is easy to implement, and produces only small errors. The same characteristics apply to B-spline interpolation, but the 6 x 6 cubic avoids the intrinsic border effects produced by the B-spline technique. However, the goal of this study was not to determine an overall best method, but to present a comprehensive catalogue of methods in a uniform terminology, to define general properties and requirements of local techniques, and to enable the reader to select the method that is optimal for a specific application in medical imaging.
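As a hedged illustration of the kernel-based (convolution) interpolation scheme discussed in this abstract, the sketch below implements the classical four-point cubic convolution kernel (Keys, a = -0.5) together with a simple 1-D resampling routine. It is not the 6 x 6 C2-continuous kernel of the paper's Eq. (24); the function names `cubic_kernel` and `interpolate_1d` and the border handling are illustrative assumptions.

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Keys' 4-point cubic convolution kernel (a = -0.5).

    Illustrative only: the 6x6 C2-continuous kernel recommended in the
    abstract is defined in the paper's Eq. (24) and is not reproduced here.
    """
    x = np.abs(x)
    out = np.zeros_like(x)
    near = x < 1
    far = (x >= 1) & (x < 2)
    out[near] = (a + 2) * x[near]**3 - (a + 3) * x[near]**2 + 1
    out[far] = a * (x[far]**3 - 5 * x[far]**2 + 8 * x[far] - 4)
    return out

def interpolate_1d(samples, positions, kernel=cubic_kernel, support=2):
    """Resample a uniformly sampled 1-D signal at arbitrary positions."""
    samples = np.asarray(samples, dtype=float)
    result = np.zeros(len(positions), dtype=float)
    for i, p in enumerate(positions):
        base = int(np.floor(p))
        idx = np.arange(base - support + 1, base + support + 1)
        idx_clipped = np.clip(idx, 0, len(samples) - 1)  # simple border handling
        result[i] = np.sum(samples[idx_clipped] * kernel(p - idx))
    return result
```

At any sampling phase the weights of this kernel sum to one, which corresponds to the "DC-constant" interpolator property that the abstract names as a requirement for successful methods.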
SUMMARY We present a novel technique for the determination of resistivity structures associated with arbitrary surface topography. The approach represents a triple-grid inversion technique that is based on unstructured tetrahedral meshes and finite-element forward calculation. The three grids are characterized as follows: a relatively coarse parameter grid defines the elements whose resistivities are to be determined; on the secondary field grid, the forward calculations in each inversion step are carried out using a secondary potential (SP) approach; the primary fields are provided by a one-time simulation on the highly refined primary field grid at the beginning of the inversion process. We use a Gauss-Newton method with inexact line search to fit the data within error bounds. A global regularization scheme using special smoothness constraints is applied. The regularization parameter balancing data misfit and model roughness is determined by an L-curve method and finally evaluated by the discrepancy principle. To solve the inverse subproblem efficiently, a least-squares solver is presented. We apply our technique to synthetic data from a burial mound to demonstrate its effectiveness. A resolution-dependent parametrization helps to keep the inverse problem small enough to cope with the memory limitations of today's standard PCs. Furthermore, the SP calculation reduces the computation time significantly. This is a crucial issue since the forward calculation is generally very time consuming. Thus, the approach can be applied to large-scale 3-D problems as encountered in practice, which is finally demonstrated on field data. As a by-product of the primary potential calculation we obtain a quantification of the topography effect and the corresponding geometric factors. The latter are used for the calculation of apparent resistivities to prevent the reconstruction process from topography-induced artefacts.
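A minimal, hedged sketch of one smoothness-constrained Gauss-Newton update of the kind described above follows, with Jacobian J, smoothness-constraint matrix C, and regularization parameter lambda. The actual weighting, model transformations, and dedicated least-squares solver of the triple-grid scheme are not reproduced; all names and the dense formulation are illustrative assumptions.

```python
import numpy as np

def gauss_newton_step(J, C, d, f_m, m, lam):
    """One smoothness-constrained Gauss-Newton model update (dense sketch).

    Minimizes || J*dm - (d - f(m)) ||^2 + lam * || C*(m + dm) ||^2 over dm,
    solved here as a stacked linear least-squares problem.
    """
    r = d - f_m                                       # data residual
    A = np.vstack([J, np.sqrt(lam) * C])              # stacked system matrix
    b = np.concatenate([r, -np.sqrt(lam) * (C @ m)])  # residual and roughness terms
    dm, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dm

# An inexact line search then picks a step length tau in (0, 1] such that the
# error-weighted data misfit decreases:  m_new = m + tau * dm.
```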
SUMMARY We present techniques for the efficient numerical computation of the electrical potential with finite-element methods in 3-D and for arbitrary topography. The crucial innovation is, firstly, the incorporation of unstructured tetrahedral meshes, which allow for efficient local mesh refinement and a highly flexible description of arbitrary model geometry. Secondly, by implementing quadratic shape functions we achieve considerably more accurate results. Exploiting a secondary potential (SP) approach, meshes are downsized significantly in comparison with the highly refined meshes required for total potential calculation. The latter, however, are necessary for the determination of the required primary potential in arbitrary model domains. To start with, we concentrate on the simulation of homogeneous models with different geometries at the surface and subsurface to quantify their influence. This results in a so-called geometry effect, which is not merely a side effect but may be responsible for serious misinterpretations. Moreover, it represents the basis for treating heterogeneous conductivity models with the SP approach, which is especially promising for the inverse problem. We address how the resulting system of equations is solved most efficiently, using modern multifrontal direct solvers in conjunction with reordering strategies or more traditional preconditioned conjugate gradient methods, depending on the size of the problem. Furthermore, we present a reciprocity approach to estimate modelling errors and investigate to what degree the model discretization has to be refined to yield sufficiently accurate results.
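The secondary potential splitting can be illustrated with the analytic primary potential of a point current source on a flat, homogeneous half-space; over arbitrary topography this closed form no longer holds, which is why the abstract resorts to a numerically computed primary potential on a refined mesh. The sketch below is a generic illustration with assumed names and parameters, not the paper's implementation.

```python
import numpy as np

def primary_potential(points, source, current=1.0, sigma0=0.01):
    """Analytic primary potential u_p of a point current source located on the
    surface of a homogeneous half-space of conductivity sigma0 (flat surface):

        u_p(r) = I / (2 * pi * sigma0 * |r - r_source|)

    In the SP approach the total potential is split as u = u_p + u_s, and only
    the secondary potential u_s, driven by deviations of the conductivity from
    sigma0, is computed with finite elements on a much coarser mesh. For
    arbitrary topography, u_p itself must be simulated numerically.
    """
    r = np.linalg.norm(np.asarray(points) - np.asarray(source), axis=1)
    return current / (2.0 * np.pi * sigma0 * r)
```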
Problem-based learning (PBL) is an established didactic approach in medical education worldwide. The impact of PBL depends on the tutors' quality and the students' motivation. To enhance students' motivation and satisfaction and to overcome the problems of varying tutor quality, online learning and face-to-face classes were systematically combined, resulting in a blended learning scenario (blended problem-based learning, bPBL). The study aims at determining whether bPBL increases the students' motivation and supports the learning process with respect to the students' cooperation, their orientation, and more reliable tutoring. The blended PBL was developed in a cooperation of students and teachers. The well-established Seven Jump scheme of PBL was carefully complemented by eLearning modules. On the first training day of bPBL the students start to work together with the online program, but without a tutor; on the final training day the tutor participates in the meeting for additional help and feedback. The program was evaluated by a mixed-methods study. The traditional PBL course was compared with the blended PBL by means of qualitative and quantitative questionnaires, standardized group interviews, and students' test results. In addition, log files were analyzed. A total of 185 third-year students and 14 tutors took part in the study. Motivation, subjective learning gains, and satisfaction received significantly higher ratings from the bPBL students than from the students learning by traditional PBL. The tutors' opinions and the test results showed no differences between the groups. Working with the web-based learning environment was rated as very good by the students. According to the log-file analysis, the web-based learning module was used frequently and improved cooperation during self-directed learning.
An accurate and efficient 3-D finite-difference forward algorithm for DC resistivity modelling is developed. The governing differential equations of the resistivity problem are discretized using central finite differences derived from a second-order Taylor series expansion. Electrical conductivity values may be arbitrarily distributed within the half-space. Conductivities at the grid points are calculated by a volume-weighted arithmetic average of the conductivities assigned to the grid cells. Variable grid spacing is incorporated. The algorithm does not limit the number and configuration of the sources, although all illustrative examples are computed using two current electrodes at the surface. In general, the linear set of equations resulting from this kind of discretization is non-symmetric and requires generalized numerical equation solvers. However, after symmetrizing the matrix equations, the ordinary conjugate gradient method becomes applicable. It takes advantage of the matrix symmetry and is thus superior to the generalized methods. An efficient SSOR preconditioner (symmetric successive overrelaxation) provides fast convergence by decreasing the spectral condition number of the matrix without using additional memory. Furthermore, a compact storage scheme reduces memory requirements and accelerates matrix operations. The performance of five different equation solvers is investigated in terms of CPU time. The preconditioned conjugate gradient method (CGPC) is shown to be the most efficient matrix solver and is able to solve large equation systems in moderate times (approximately 24 minutes on a DEC Alpha workstation for a grid with 50000 nodes, and 48 minutes for 200000 nodes). The importance of the tolerance value in the stopping criterion for the iteration process is pointed out. In order to investigate the accuracy, the numerical results are compared with analytical or other solutions for three different model classes, yielding maximum deviations of 3.5 per cent or much less for most of the computed values of the apparent resistivity. In conclusion, the presented algorithm provides a powerful and flexible tool for practical application in resistivity modelling.
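To illustrate the solver strategy described above, the following hedged sketch builds a textbook SSOR preconditioner for a symmetric sparse system and passes it to a conjugate gradient solver. The paper's memory-free preconditioner and compact storage scheme are not reproduced; SciPy, the relaxation parameter, and all names are assumptions for illustration only.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator, spsolve_triangular

def ssor_preconditioner(A, omega=1.5):
    """Build an SSOR preconditioner M^-1 for a symmetric sparse matrix A.

    M = 1/(omega*(2-omega)) * (D + omega*L) * D^-1 * (D + omega*U),
    with A = D + L + U (L/U strictly lower/upper triangular, U = L^T).
    Generic textbook variant; not the paper's storage-free implementation.
    """
    d = A.diagonal()
    D = sp.diags(d)
    lower = (D + omega * sp.tril(A, k=-1)).tocsr()
    upper = (D + omega * sp.triu(A, k=1)).tocsr()

    def apply(r):
        y = spsolve_triangular(lower, r, lower=True)   # forward sweep
        y = d * y                                      # diagonal scaling
        z = spsolve_triangular(upper, y, lower=False)  # backward sweep
        return omega * (2.0 - omega) * z

    n = A.shape[0]
    return LinearOperator((n, n), matvec=apply)

# Usage (A: symmetric sparse system matrix, b: source vector); the stopping
# tolerance of cg() plays the role of the tolerance value discussed above:
#   x, info = cg(A, b, M=ssor_preconditioner(A))
```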