Epicardial adipose tissue (EAT) volume has been linked to coronary artery disease and the risk of major adverse cardiac events. Because manual quantification of EAT is time-consuming, requires specialized training, and is prone to human error, we developed a deep learning method (DeepFat) for the automatic assessment of EAT on non-contrast, low-dose CT calcium-score images. Our DeepFat segmented the tissue enclosed by the pericardial sac on axial slices, aided by two preprocessing steps. First, we applied an HU-attention-window with a window/level of 350/40 HU to draw attention to the sac and reduce numerical errors. Second, we applied a novel look-ahead slab-of-slices with bisection ("bisect"), in which we split the heart into halves and sequenced the lower half from bottom to middle and the upper half from top to middle, thereby presenting an always-increasing curvature of the sac to the network. EAT volume was obtained by thresholding voxels within the sac in the fat window (−190/−30 HU). Compared to manual segmentation, our algorithm gave excellent results, with volume Dice = 88.52% ± 3.3, slice Dice = 87.70% ± 7.5, EAT error = 0.5% ± 8.1, and R = 98.52% (p < 0.001). The HU-attention-window and bisect steps improved volume Dice scores by 0.49% and 3.2% absolute, respectively. Variability between analysts was comparable to variability with DeepFat. Results compared favorably with those of previous publications.
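The preprocessing and volume computation described above can be summarized in a short sketch. The code below is illustrative only, assuming a NumPy CT volume in HU and a binary sac mask produced by the segmentation network; the function names (hu_attention_window, bisect_slice_order, eat_volume_ml) and the voxel-volume parameter are hypothetical and are not taken from DeepFat's published implementation.

```python
import numpy as np

def hu_attention_window(volume_hu, level=40.0, width=350.0):
    """Clip a CT volume (in HU) to the 350/40 window/level used to
    highlight the pericardial sac, then rescale to [0, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0   # [-135, 215] HU
    clipped = np.clip(volume_hu, lo, hi)
    return (clipped - lo) / (hi - lo)

def bisect_slice_order(num_slices):
    """Return axial slice indices ordered bottom-to-middle, then
    top-to-middle, so each half presents an increasing curvature
    of the sac to the network."""
    mid = num_slices // 2
    lower = list(range(0, mid))                       # bottom -> middle
    upper = list(range(num_slices - 1, mid - 1, -1))  # top -> middle
    return lower + upper

def eat_volume_ml(volume_hu, sac_mask, voxel_volume_mm3,
                  fat_lo=-190.0, fat_hi=-30.0):
    """Count voxels inside the (boolean) sac mask that fall in the
    fat window (-190/-30 HU) and convert the total to millilitres."""
    fat_voxels = sac_mask & (volume_hu >= fat_lo) & (volume_hu <= fat_hi)
    return fat_voxels.sum() * voxel_volume_mm3 / 1000.0
```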
This paper proposes the multicolumn RBF network (MCRN) as a method to improve the accuracy and speed of a traditional radial basis function network (RBFN). The RBFN, as a fully connected artificial neural network (ANN), suffers from costly kernel inner-product calculations because many training instances are used as the centers of hidden units. This issue is not critical for small datasets, as adding more hidden units does not burden the computation time. For larger datasets, however, the RBFN requires many hidden units, and therefore many kernel computations, to generalize the problem. The MCRN is constructed by dividing a dataset into smaller subsets using the k-d tree algorithm. The resulting subsets are treated as separate training sets used to train individual RBFNs. These small RBFNs are stacked in parallel and combined into the MCRN structure during testing. The MCRN is a straightforward, easy-to-parallelize structure, because each individual ANN is trained on its own subset and is completely separate from the other ANNs. This parallelized structure reduces testing time compared with that of a single, larger RBFN, which cannot be easily parallelized because of its fully connected structure. Small, informative subsets give the MCRN regional expertise to specialize on the problem instead of generalizing it. The MCRN has been tested on many benchmark datasets and shows better accuracy and large improvements in training and testing times compared with a single RBFN. The MCRN also compares well with machine learning techniques such as the support vector machine and k-nearest neighbors.
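A minimal sketch of the MCRN idea follows: partition the training data (here with a simple median-split recursion in the spirit of a k-d tree), train one small RBFN per subset, and evaluate the columns independently at test time. The class names (SimpleRBFN, MCRNSketch), the regularized least-squares output fit, and the nearest-centroid routing of test points are assumptions made for illustration; the abstract does not specify these details.

```python
import numpy as np

class SimpleRBFN:
    """Minimal RBF network: every training instance is a hidden-unit
    center; output weights are fit by regularized least squares."""
    def __init__(self, gamma=1.0, reg=1e-6):
        self.gamma, self.reg = gamma, reg

    def _kernel(self, X):
        # Gaussian kernel between inputs and the stored centers.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.centers = X
        K = self._kernel(X)
        self.w = np.linalg.solve(K.T @ K + self.reg * np.eye(K.shape[1]),
                                 K.T @ y)
        return self

    def predict(self, X):
        return self._kernel(X) @ self.w

class MCRNSketch:
    """Split the training set into small subsets, train one RBFN
    column per subset, and route each test point to the column whose
    subset centroid is nearest (an assumed routing rule)."""
    def __init__(self, leaf_size=200, gamma=1.0):
        self.leaf_size, self.gamma = leaf_size, gamma

    def _split(self, X, y):
        # Median split along the widest dimension, k-d-tree style.
        if len(X) <= self.leaf_size:
            return [(X, y)]
        dim = np.argmax(X.var(axis=0))
        order = np.argsort(X[:, dim])
        mid = len(X) // 2
        left, right = order[:mid], order[mid:]
        return self._split(X[left], y[left]) + self._split(X[right], y[right])

    def fit(self, X, y):
        subsets = self._split(X, y)
        self.centroids = np.array([sx.mean(axis=0) for sx, _ in subsets])
        self.columns = [SimpleRBFN(self.gamma).fit(sx, sy) for sx, sy in subsets]
        return self

    def predict(self, X):
        # Nearest-centroid routing; columns could run in parallel.
        d2 = ((X[:, None, :] - self.centroids[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        out = np.empty(len(X))
        for i, col in enumerate(self.columns):
            mask = idx == i
            if mask.any():
                out[mask] = col.predict(X[mask])
        return out
```

Because each column only ever touches the centers from its own subset, the kernel matrices stay small, which is where the reported reductions in training and testing time come from relative to a single large RBFN.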