Efficiently obtaining a reliable coronary artery centerline from computed tomography angiography data is relevant in clinical practice. Although numerous methods have been presented for this purpose, no standardized evaluation methodology has so far been published for reliably evaluating and comparing the performance of existing or newly developed coronary artery centerline extraction algorithms. This paper describes a standardized evaluation methodology and reference database for the quantitative evaluation of coronary artery centerline extraction algorithms. The contribution of this work is fourfold: 1) a method is described to create a consensus centerline from multiple observers, 2) well-defined measures are presented for the evaluation of coronary artery centerline extraction algorithms, 3) a database containing thirty-two cardiac CTA datasets with a corresponding reference standard is described and made available, and 4) thirteen coronary artery centerline extraction algorithms, implemented by different research groups, are quantitatively evaluated and compared. The presented evaluation framework is made available to the medical imaging community for benchmarking existing or newly developed coronary centerline extraction algorithms.
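The evaluation measures themselves are defined in the paper; purely to illustrate the point-correspondence style of centerline evaluation, here is a minimal hypothetical sketch. The function name, the symmetric formulation, and the fixed tolerance `tol` are our assumptions for illustration, not the paper's actual definitions:

```python
import numpy as np

def centerline_overlap(extracted, reference, tol=1.0):
    """Fraction of points of either centerline lying within `tol` (mm) of the
    other one. `extracted` is (N, 3), `reference` is (M, 3); `tol` is a
    hypothetical correspondence radius, not the criterion of any real benchmark.
    """
    # Pairwise distances between the two point sets, shape (N, M).
    d = np.linalg.norm(extracted[:, None, :] - reference[None, :, :], axis=2)
    matched_ext = (d.min(axis=1) <= tol).sum()  # extracted points near the reference
    matched_ref = (d.min(axis=0) <= tol).sum()  # reference points near the extraction
    return (matched_ext + matched_ref) / (len(extracted) + len(reference))

# Toy usage: a perfect extraction overlaps completely.
ref = np.column_stack([np.linspace(0, 50, 100), np.zeros(100), np.zeros(100)])
print(centerline_overlap(ref.copy(), ref))  # -> 1.0
```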
Synopsis: We present a purely combinatorial procedure for finding an isolating neighbourhood and an index pair contained in a given set that is a finite union of cubes in ℝ^s. The procedure is applied in a computer-assisted computation of the Conley index of an isolated invariant subset of the Hénon attractor. As a corollary, it is shown that the Hénon attractor contains periodic orbits of all principal periods except 3 and 5.
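To give a concrete flavor of such cubical methods (this is not the paper's procedure), the sketch below rigorously encloses the image of one axis-aligned box under the Hénon map using naive interval arithmetic; a combinatorial computation would map every cube of a grid through such an enclosure and analyze the resulting cube-to-cube transition map. Directed rounding, which a fully rigorous implementation would need, is omitted here:

```python
# Minimal sketch: enclose the image of a box under the Hénon map
# h(x, y) = (1 + y - a*x**2, b*x) with interval arithmetic.

A, B = 1.4, 0.3  # classical Hénon parameters

def isqr(lo, hi):
    """Tight interval enclosure of {x*x : x in [lo, hi]}."""
    if lo >= 0.0:
        return lo * lo, hi * hi
    if hi <= 0.0:
        return hi * hi, lo * lo
    return 0.0, max(lo * lo, hi * hi)

def henon_box(xlo, xhi, ylo, yhi):
    """Enclosure of the image of the box [xlo, xhi] x [ylo, yhi]."""
    sqlo, sqhi = isqr(xlo, xhi)
    # x' = 1 + y - a*x^2: increasing in y, decreasing in the squared term.
    new_x = (1.0 + ylo - A * sqhi, 1.0 + yhi - A * sqlo)
    # y' = b*x with b > 0 preserves orientation.
    new_y = (B * xlo, B * xhi)
    return new_x, new_y

# Example: image enclosure of one cube from a grid covering the attractor.
xr, yr = henon_box(0.6, 0.7, 0.15, 0.20)
print(xr, yr)
```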
We present a simple method for compressing very large and regularly sampled scalar fields. Our method is particularly attractive when the entire data set does not fit in memory and when the sampling rate is high relative to the feature size of the scalar field in all dimensions. Although we report results for ℝ³ and ℝ⁴ data sets, the proposed approach may be applied to higher dimensions. The method is based on the new Lorenzo predictor, introduced here, which estimates the value of the scalar field at each sample from the values at processed neighbors. The predicted values are exact when the n-dimensional scalar field is an implicit polynomial of degree n−1. Surprisingly, when the residuals (differences between the actual and predicted values) are encoded using arithmetic coding, the proposed method often outperforms wavelet compression in an L∞ sense. The proposed approach may be used both for lossy and lossless compression and is well suited for out-of-core compression and decompression, because a trivial implementation, which sweeps through the data set reading it once, requires maintaining only a small buffer in core memory, whose size barely exceeds a single (n−1)-dimensional slice of the data.
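For intuition, here is a minimal in-core sketch of the 3D case. The function name and the zero-padding at the boundary are our assumptions; the actual method streams through the data out-of-core, keeping only about one 2D slice buffered:

```python
import numpy as np

def lorenzo_residuals_3d(f):
    """Residuals of the 3D Lorenzo predictor over a dense scalar field f.

    Each sample is predicted from the seven previously visited corners of its
    unit cube, with signs alternating by Manhattan distance:
      pred(i,j,k) =  f[i-1,j,k] + f[i,j-1,k] + f[i,j,k-1]
                   - f[i-1,j-1,k] - f[i-1,j,k-1] - f[i,j-1,k-1]
                   + f[i-1,j-1,k-1]
    Samples outside the grid are treated as 0 (zero padding).
    """
    g = np.zeros(tuple(s + 1 for s in f.shape), dtype=f.dtype)
    g[1:, 1:, 1:] = f
    pred = (g[:-1, 1:, 1:] + g[1:, :-1, 1:] + g[1:, 1:, :-1]
            - g[:-1, :-1, 1:] - g[:-1, 1:, :-1] - g[1:, :-1, :-1]
            + g[:-1, :-1, :-1])
    return f - pred  # residuals; entropy-code these (e.g., arithmetic coding)

# Sanity check: a polynomial of total degree 2 (= n - 1 for n = 3) is
# predicted exactly away from the zero-padded boundary.
i, j, k = np.mgrid[0:8, 0:8, 0:8].astype(float)
field = 2 * i * j + k * k - 3 * j + 1
assert np.allclose(lorenzo_residuals_3d(field)[1:, 1:, 1:], 0)
```

The exactness claim follows because the residual equals the alternating mixed first difference of f over the unit cube, i.e., one first difference per axis composed together: any monomial missing at least one variable is annihilated by the difference along that axis, and a monomial containing all n variables has total degree at least n, so every polynomial of total degree at most n−1 is predicted without error.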
Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Compression, scalar fields, out-of-core.