Abstract. Augmenting X-ray imaging with a 3D roadmap to improve guidance is a common strategy. Such approaches benefit from automated analysis of the X-ray images, such as the automatic detection and tracking of instruments. In this paper, we propose a real-time method to segment the catheter and guidewire in 2D X-ray fluoroscopic sequences. The method is based on deep convolutional neural networks. The network takes as input the current image and the three previous ones, and segments the catheter and guidewire in the current image. Subsequently, a centerline model of the catheter is constructed from the segmented image. A small set of annotated data combined with data augmentation is used to train the network. We trained the method on images from 182 X-ray sequences from 23 different interventions. On a testing set with images of 55 X-ray sequences from 5 other interventions, a median centerline distance error of 0.2 mm and a median tip distance error of 0.9 mm were obtained. The segmentation of the instruments in 2D X-ray sequences is fully automatic and runs in real time.
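Below is a minimal sketch of the kind of network described above, assuming a small PyTorch encoder-decoder that stacks the current frame with the three previous ones as a 4-channel input. The layer sizes and the single-mask output are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (not the authors' exact architecture): a small encoder-decoder
# that takes the current frame stacked with the three previous frames as a
# 4-channel input and predicts a per-pixel catheter/guidewire mask.
import torch
import torch.nn as nn

class CatheterSegNet(nn.Module):
    def __init__(self, in_frames=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_frames, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),  # per-pixel logit for the instrument mask
        )

    def forward(self, frames):  # frames: (B, 4, H, W)
        return self.decoder(self.encoder(frames))

# Usage: stack the current frame with the three previous ones along the channel axis.
x = torch.rand(1, 4, 256, 256)           # simulated fluoroscopy frames
mask_logits = CatheterSegNet()(x)         # (1, 1, 256, 256)
mask = torch.sigmoid(mask_logits) > 0.5   # binary segmentation of the current frame
```

The segmented mask would then be skeletonized to obtain the catheter centerline model mentioned in the abstract; that post-processing step is not shown here.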
In minimally invasive image-guided catheterization procedures, physicians require information about the catheter position with respect to the patient's vasculature. However, in fluoroscopic images, visualization of the vasculature requires toxic contrast agent. Static vasculature roadmapping, which can reduce the usage of iodine contrast, is hampered by breathing motion in abdominal catheterization. In this paper, we propose a method to track the catheter tip inside the patient's 3D vessel tree using intra-operative single-plane 2D X-ray image sequences and a peri-operative 3D rotational angiography (3DRA). The method is based on a hidden Markov model (HMM) whose states are the possible positions of the catheter tip inside the 3D vessel tree. The transitions from state to state model the probabilities for the catheter tip to move from one position to another. The HMM is updated using observation scores based on the registration between the 2D catheter centerline extracted from the 2D X-ray image and the 2D projection of the 3D vessel tree centerline extracted from the 3DRA. The method is extensively evaluated on simulated and clinical datasets acquired during liver abdominal catheterization. The evaluations show a median 3D tip tracking error of 2.3 mm with optimal settings on simulated data. The registered vessels close to the tip have a median distance error of 4.7 mm with angiographic data and optimal settings. Such accuracy is sufficient to provide physicians with an up-to-date roadmap. The method tracks the catheter tip in real time and enables roadmapping during catheterization procedures.
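The following is a minimal sketch of one HMM filtering step of this kind, written in Python/NumPy under assumed models: the Gaussian transition kernel, the `sigma` motion parameter, and the placeholder observation scores are illustrative choices, not the authors' formulation.

```python
# Minimal sketch of an HMM update, not the authors' implementation. States are
# discretised positions along the 3D vessel-tree centerline; the transition
# probability favours small displacements of the tip between consecutive frames;
# the observation scores would come from registering the 2D catheter centerline
# to the projected 3D vessel centerline.
import numpy as np

def hmm_step(prior, positions_3d, obs_scores, sigma=5.0):
    """One HMM filtering step.

    prior        : (N,) belief over tip positions at the previous frame
    positions_3d : (N, 3) candidate tip positions on the vessel centerline (mm)
    obs_scores   : (N,) registration-based likelihood of each position
    sigma        : assumed standard deviation of tip motion per frame (mm)
    """
    # Transition model: Gaussian on the Euclidean distance between positions.
    d = np.linalg.norm(positions_3d[:, None, :] - positions_3d[None, :, :], axis=-1)
    trans = np.exp(-0.5 * (d / sigma) ** 2)
    trans /= trans.sum(axis=1, keepdims=True)

    predicted = prior @ trans            # propagate belief through the transitions
    posterior = predicted * obs_scores   # weight by the observation scores
    return posterior / posterior.sum()

# The tracked tip at each frame is the state with the highest posterior belief.
```

Repeating this step for every incoming X-ray frame yields the tip trajectory inside the 3D vessel tree, which can then be projected back onto the fluoroscopy as an up-to-date roadmap.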
Cribriform growth patterns in prostate carcinoma are associated with poor prognosis. We aimed to introduce a deep learning method to detect such patterns automatically. To do so, a convolutional neural network was trained to detect cribriform growth patterns on 128 prostate needle biopsies. Ensemble learning, taking into account other tumor growth patterns during training, was used to cope with heterogeneous and limited tumor tissue occurrences. ROC and FROC analyses were applied to assess network performance regarding the detection of biopsies harboring cribriform growth patterns. The ROC analysis yielded a mean area under the curve up to 0.81. FROC analysis demonstrated a sensitivity of 0.9 for regions larger than 0.0150 mm² with on average 7.5 false positives. To benchmark method performance against intra-observer annotation variability, false positive and false negative detections were re-evaluated by the pathologists. Pathologists considered 9% of the false positive regions as cribriform and 11% as possibly cribriform; 44% of the false negative regions were not annotated as cribriform. As a final experiment, the network was also applied to a dataset of 60 biopsy regions annotated by 23 pathologists. At the cut-off reaching the highest sensitivity, all images annotated as cribriform by at least 7 of the 23 pathologists were detected as cribriform by the network, and 9 of the 60 images were detected as cribriform although no pathologist had labelled them as such. In conclusion, the proposed deep learning method has high sensitivity for detecting cribriform growth patterns at the expense of a limited number of false positives. It can detect cribriform regions that are labelled as such by at least a minority of pathologists. Therefore, it could assist clinical decision making by suggesting suspicious regions.
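A minimal sketch of a biopsy-level ROC analysis of this kind is shown below, using scikit-learn; the `scores` and `labels` arrays are hypothetical, and the operating-point selection is only an illustration of choosing the cut-off reaching the highest sensitivity, not the authors' exact evaluation pipeline.

```python
# Minimal sketch of a biopsy-level ROC analysis. `scores` stand in for
# per-biopsy cribriform probabilities produced by the network ensemble,
# and `labels` for the pathologists' biopsy-level annotations (both hypothetical).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

scores = np.array([0.9, 0.2, 0.7, 0.1, 0.8, 0.3])  # hypothetical network outputs
labels = np.array([1, 0, 1, 0, 1, 0])              # hypothetical ground truth

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)

# Operating point analogous to "the cut-off reaching the highest sensitivity":
# the first threshold at which sensitivity (TPR) is maximal.
best = np.argmax(tpr)
print(f"AUC={auc:.2f}, sensitivity={tpr[best]:.2f} at threshold={thresholds[best]:.2f}")
```

The FROC analysis mentioned above would additionally count false positive regions per biopsy at each threshold; that region-level bookkeeping is omitted here.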