The intima-media thickness (IMT) of the common carotid artery is a widely used clinical marker of severe cardiovascular diseases. IMT is usually measured manually on longitudinal B-mode ultrasound images. Many computer-based techniques for IMT measurement have been proposed to overcome the limits of manual segmentation; most of these, however, require a certain degree of user interaction. In this paper we describe a new, completely automated layer extraction technique (named CALEXia) for the segmentation and IMT measurement of the carotid wall in ultrasound images. CALEXia is based on an integrated approach consisting of feature extraction, line fitting, and classification that enables the automated tracing of the carotid adventitial walls. IMT is then measured by relying on a fuzzy K-means classifier. We tested CALEXia on a database of 200 images and compared its performance with that of a previously developed methodology based on signal analysis (CULEXsa). Three trained operators manually segmented the images, and the average profiles were taken as the ground truth. The average errors of CALEXia for the lumen-intima (LI) and media-adventitia (MA) interface tracings were 1.46 ± 1.51 pixels (0.091 ± 0.093 mm) and 0.40 ± 0.87 pixels (0.025 ± 0.055 mm), respectively. The corresponding errors for CULEXsa were 0.55 ± 0.51 pixels (0.035 ± 0.032 mm) and 0.59 ± 0.46 pixels (0.037 ± 0.029 mm). The IMT measurement error was 0.87 ± 0.56 pixels (0.054 ± 0.035 mm) for CALEXia and 0.12 ± 0.14 pixels (0.01 ± 0.01 mm) for CULEXsa. Thus, CALEXia showed limited performance in segmenting the LI interface, but outperformed CULEXsa on the MA interface and in the number of images correctly processed (190 for CALEXia versus 184 for CULEXsa). Since the two techniques rest on complementary strategies, we anticipate fusing them for further improvements in IMT measurement.
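The abstract states that CALEXia measures IMT by relying on a fuzzy K-means classifier but does not reproduce the implementation. As a minimal sketch, a plain fuzzy K-means (fuzzy c-means) applied to the grey-level intensities of an image column could separate lumen, intima-media complex, and adventitia; the three-class setup, the 1-D intensity feature, and all parameter values below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def fuzzy_kmeans(x, k=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Plain fuzzy K-means (fuzzy c-means) on a 1-D feature vector x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    u = rng.random((len(x), k))
    u /= u.sum(axis=1, keepdims=True)            # membership rows sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]   # weighted class means
        d = np.abs(x - centers.T) + 1e-12                  # distance to each center
        u_new = 1.0 / d ** (2.0 / (m - 1.0))                # standard FCM update
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return centers.ravel(), u

# Hypothetical usage: cluster the intensities of one image column into three
# classes; interface positions would then be read off where the dominant
# membership switches between classes along the column.
column = np.r_[np.full(30, 20.0), np.full(10, 90.0), np.full(20, 200.0)]
centers, memberships = fuzzy_kmeans(column, k=3)
```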
The aim of this paper is to describe a novel and completely automated technique for carotid artery (CA) recognition, far (distal) wall segmentation, and intima-media thickness (IMT) measurement, a strong clinical tool for cardiovascular risk assessment. The architecture of the completely automated multiresolution edge snapper (CAMES) consists of two stages: 1) automated CA recognition based on a combination of scale-space analysis and statistical classification in a multiresolution framework, and 2) automated segmentation of the lumen-intima (LI) and media-adventitia (MA) interfaces of the far (distal) wall and IMT measurement. Our database of 365 B-mode longitudinal carotid images was acquired at four different institutions and covers different ethnic backgrounds. The ground-truth (GT) database was the average manual segmentation from three clinical experts. The mean distance ± standard deviation of CAMES with respect to the GT profiles was 0.081 ± 0.099 mm for the LI interface and 0.082 ± 0.197 mm for the MA interface. The IMT measurement error between CAMES and GT was 0.078 ± 0.112 mm. CAMES was benchmarked against a previously developed automated technique based on an integrated approach using feature-based extraction and classification (CALEX). Although CAMES underestimated the IMT value, it showed a strong improvement in segmentation errors over CALEX: 8% for the LI interface and 42% for the MA interface. The overall IMT measurement bias of CAMES improved by 36% over CALEX. Finally, the figure-of-merit of CAMES was 95.8%, compared with 87.4% for CALEX. The combination of multiresolution CA recognition and far-wall segmentation led to an automated, low-complexity, real-time, and accurate technique for carotid IMT measurement. Validation on a multiethnic, multi-institutional data set demonstrated the robustness of the technique, which can constitute a clinically valid IMT measurement for assistance in atherosclerosis disease management.
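As a rough illustration of the multiresolution idea behind the first CAMES stage, the sketch below smooths and downsamples the image (a simple Gaussian pyramid) and, per column, takes the strongest dark-to-bright vertical transition in the lower half of the frame as a coarse far-adventitia estimate. This deliberately omits the statistical classification that the paper combines with the scale-space step; the function name and all parameters are assumptions made for illustration only.

```python
import numpy as np
from scipy import ndimage

def coarse_far_wall_estimate(img, levels=2, sigma=2.0):
    """Rough multiresolution far-wall locator: smooth and downsample the
    B-mode frame, then per column pick the strongest downward dark-to-bright
    transition in the lower half (the far adventitia is typically the
    brightest edge below the lumen)."""
    coarse = np.asarray(img, dtype=float)
    for _ in range(levels):                          # simple Gaussian pyramid
        coarse = ndimage.gaussian_filter(coarse, sigma)[::2, ::2]
    grad = ndimage.sobel(coarse, axis=0)             # vertical intensity gradient
    half = coarse.shape[0] // 2
    rows = half + np.argmax(grad[half:, :], axis=0)  # per-column best transition
    return rows * (2 ** levels)                      # map back to full resolution
```

A second, fine-resolution pass (the "edge snapping") would then refine the LI and MA interfaces around this coarse guess.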
The mean distance errors ± SD using this integrated approach were 1.05 ± 1.04 pixels (0.07 ± 0.07 mm) for the proximal (near) adventitia and 2.68 ± 3.94 pixels (0.17 ± 0.24 mm) for the distal (far) adventitia. Sixteen of the 200 images were not perfectly traced because of the presence of plaques and blood backscattering. The low computational cost makes near real-time detection possible. Conclusions. The CALEXia algorithm not only detects the CCA automatically, but is also robust and has been validated on a large database. It can constitute a general basis for a completely automated segmentation procedure widely applicable to other anatomies.
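The error figures quoted above and in the other abstracts are point-wise distances between the automatically traced profile and the ground-truth profile, summarised as mean ± SD and converted to millimetres by the pixel spacing. A minimal sketch, assuming vertically aligned profiles and a hypothetical pixel size:

```python
import numpy as np

PIXEL_MM = 0.0625  # assumed pixel spacing in mm; depends on scanner and depth setting

def mean_distance_error(auto_profile, gt_profile, pixel_mm=PIXEL_MM):
    """Per-column absolute distance between an automatic interface tracing and
    the ground-truth profile, reported as (mean_px, sd_px, mean_mm, sd_mm)."""
    d = np.abs(np.asarray(auto_profile, float) - np.asarray(gt_profile, float))
    return d.mean(), d.std(), d.mean() * pixel_mm, d.std() * pixel_mm

def imt_mm(li_profile, ma_profile, pixel_mm=PIXEL_MM):
    """IMT as the mean vertical distance between the LI and MA profiles."""
    return np.mean(np.abs(np.asarray(ma_profile, float) - np.asarray(li_profile, float))) * pixel_mm
```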
Summary
• Minirhizotrons provide detailed information on the production, life history and mortality of fine roots. However, manual processing of minirhizotron images is time-consuming, limiting the number and size of experiments that can reasonably be analysed. Previously, an algorithm was developed to automatically detect and measure individual roots in minirhizotron images. Here, species-specific root classifiers were developed to discriminate detected roots from bright background artifacts.
• Classifiers were developed from training images of peach (Prunus persica), Freeman maple (Acer × freemanii) and sweetbay magnolia (Magnolia virginiana) using the AdaBoost algorithm. True- and false-positive rates for classifiers were estimated using receiver operating characteristic curves.
• Classifiers gave true-positive rates of 89-94% and false-positive rates of 3-7% when applied to nontraining images of the species for which they were developed. The application of a classifier trained on one species to images from another species resulted in little or no reduction in accuracy.
• These results suggest that a single root classifier can be used to distinguish roots from background objects across multiple minirhizotron experiments. By incorporating root detection and discrimination algorithms into an open-source minirhizotron image analysis application, many analysis tasks that are currently performed by hand can be automated.
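The root/background discrimination described above is a standard supervised boosting setup. A minimal sketch using scikit-learn's AdaBoost implementation and an ROC curve to read off true- and false-positive rates is given below; the feature matrix and labels are synthetic placeholders, not the study's actual per-object descriptors.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

# Placeholder data: X would hold per-object features extracted from candidate
# regions in minirhizotron images (e.g. length, width, mean intensity);
# y = 1 for root, 0 for bright background artifact.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# ROC curve: sweeping the decision threshold trades off true-positive rate
# against false-positive rate, as in the reported 89-94% / 3-7% figures.
fpr, tpr, thresholds = roc_curve(y_te, clf.decision_function(X_te))
```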