The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms for identifying vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer-aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time-consuming, any real-world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories for a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at the International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published on the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.
Pulmonary hypertension (PH) can result in vascular pruning and increased tortuosity of the blood vessels. In this study we examined whether automatic extraction of lung vessels from contrast-enhanced thoracic computed tomography (CT) scans and calculation of tortuosity, as well as the 3D fractal dimension, of the segmented lung vessels yields measures associated with PH.

In this pilot study, 24 patients (18 with and 6 without PH) were examined with thorax CT following their diagnostic or follow-up right-sided heart catheterisation (RHC). Images of the whole thorax were acquired with a 128-slice dual-energy CT scanner. After lung identification, a vessel enhancement filter was used to estimate the lung vessel centerlines. From these, the vascular trees were generated. For each vessel segment the tortuosity was calculated using the distance metric. Fractal dimension was computed using 3D box counting. Hemodynamic data from RHC were used for correlation analysis.

The distance metric, the readout of vessel tortuosity, correlated with mean pulmonary arterial pressure (Spearman correlation coefficient: ρ = 0.60) and other relevant parameters, such as pulmonary vascular resistance (ρ = 0.59), arterio-venous difference in oxygen (ρ = 0.54), and arterial (ρ = −0.54) and venous oxygen saturation (ρ = −0.68). Moreover, the distance metric increased with increasing WHO functional class. In contrast, the 3D fractal dimension was only significantly correlated with arterial oxygen saturation (ρ = 0.47).

Automatic detection of the lung vascular tree can provide clinically relevant measures of blood vessel morphology. Non-invasive quantification of pulmonary vessel tortuosity may provide a tool to evaluate the severity of pulmonary hypertension.

Trial Registration: ClinicalTrials.gov NCT01607489
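The two morphological measures above are standard constructions that can be sketched compactly. The distance metric is the ratio of a vessel segment's path length to the straight-line distance between its endpoints, and 3D box counting estimates fractal dimension from the slope of log(box count) versus log(1/box size). The sketch below is a minimal illustration under those textbook definitions, not the study's actual pipeline; function names and the choice of box sizes are assumptions.

```python
import numpy as np

def distance_metric(points):
    """Tortuosity as the ratio of a centerline's arc length to the
    straight-line (chord) distance between its endpoints; >= 1.0."""
    points = np.asarray(points, dtype=float)
    path_length = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord_length = np.linalg.norm(points[-1] - points[0])
    return path_length / chord_length

def box_counting_dimension(voxels, sizes=(1, 2, 4, 8, 16)):
    """Estimate the 3D fractal dimension of a binary volume:
    count occupied boxes at several scales, then fit the slope of
    log(count) against log(1/box size)."""
    voxels = np.asarray(voxels, dtype=bool)
    counts = []
    for s in sizes:
        # Pad so every axis divides evenly by the box size.
        pad = [(0, (-d) % s) for d in voxels.shape]
        v = np.pad(voxels, pad)
        # A box is occupied if any voxel inside it is set.
        occ = v.reshape(v.shape[0] // s, s,
                        v.shape[1] // s, s,
                        v.shape[2] // s, s).any(axis=(1, 3, 5))
        counts.append(occ.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A perfectly straight centerline gives a distance metric of exactly 1.0, and a solid cube gives a box-counting dimension of about 3; curved, tortuous vessels and sparser, pruned trees shift these values in the directions the study exploits.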
Research in Simultaneous Localization and Mapping (SLAM) has made outstanding progress in recent years. SLAM systems are now transitioning from academic to real-world applications. However, this transition has posed new, demanding challenges in terms of accuracy and robustness. To develop new SLAM systems that can address these challenges, new datasets containing cutting-edge hardware and realistic scenarios are required. We propose the Hilti SLAM Challenge Dataset. Our dataset contains indoor sequences of offices, labs, and construction environments and outdoor sequences of construction sites and parking areas. All these sequences are characterized by featureless areas and varying illumination conditions that are typical of real-world scenarios and pose great challenges to SLAM algorithms that have been developed in confined lab environments. Accurate sparse ground truth, at millimeter level, is provided for each sequence. The sensor platform used to record the data includes a number of visual, lidar, and inertial sensors, which are spatially and temporally calibrated. The purpose of this dataset is to foster research in sensor fusion to develop SLAM algorithms that can be deployed in tasks where high accuracy and robustness are required, e.g., in construction environments. Many academic and industrial groups tested their SLAM systems on the proposed dataset in the Hilti SLAM Challenge. The results of the challenge, which are summarized in this paper, show that the proposed dataset is an important asset in the development of new SLAM algorithms that are ready to be deployed in the real world.
Simultaneous Localization and Mapping (SLAM) is being deployed in real-world applications; however, many state-of-the-art solutions still struggle in common scenarios. A key necessity in progressing SLAM research is the availability of high-quality datasets and fair and transparent benchmarking. To this end, we have created the Hilti-Oxford Dataset, to push state-of-the-art SLAM systems to their limits. The dataset has a variety of challenges, ranging from sparse and regular construction sites to a 17th-century neoclassical building with fine details and curved surfaces. To encourage multi-modal SLAM approaches, we designed a data collection platform featuring a lidar, five cameras, and an IMU (Inertial Measurement Unit). With the goal of benchmarking SLAM algorithms for tasks where accuracy and robustness are paramount, we implemented a novel ground truth collection method that enables our dataset to measure SLAM pose errors with millimeter accuracy. To further ensure accuracy, the extrinsics of our platform were verified with a micrometer-accurate scanner, and temporal calibration was managed online using hardware time synchronization. The multi-modality and diversity of our dataset attracted a large field of academic and industrial researchers to enter the second edition of the Hilti SLAM challenge, which concluded in June 2022. The results of the challenge show that while the top three teams could achieve an accuracy of 2 cm or better for some sequences, the performance dropped off in more difficult sequences.
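Benchmarks like the two above typically score a submitted trajectory by comparing estimated poses against ground-truth control points after a rigid alignment, e.g., via the absolute trajectory error (ATE). The sketch below illustrates that common metric under textbook definitions (Kabsch/Umeyama alignment without scale); it is not the challenges' exact scoring protocol, and all names are assumptions.

```python
import numpy as np

def align_rigid(est, gt):
    """Least-squares rigid alignment (rotation R, translation t, no scale)
    of estimated 3D points onto ground-truth points (Kabsch/Umeyama)."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """Absolute trajectory error: RMSE of point-to-point distances
    after rigidly aligning the estimate onto the ground truth."""
    R, t = align_rigid(est, gt)
    aligned = (R @ np.asarray(est, float).T).T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

Because the alignment removes any global rigid offset between the estimate's frame and the ground-truth frame, a trajectory that differs from the ground truth only by such an offset scores an ATE of (numerically) zero; residual error then reflects genuine drift and local inaccuracy.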