Background
Coronary artery angiography is an indispensable assistive technique for cardiac interventional surgery. Segmenting and extracting blood vessels from coronary angiographic images or videos is an essential prerequisite for physicians to locate, assess, and diagnose plaques and stenoses in blood vessels.
Methods
This article proposes a novel coronary artery segmentation framework that combines a three-dimensional (3D) convolutional input layer with a two-dimensional (2D) convolutional network. Instead of the single input image used in previous medical image segmentation applications, our framework accepts a sequence of coronary angiographic images as input and outputs the clearest segmentation mask. The 3D input layer leverages the temporal information in the image sequence and fuses the multiple frames into more comprehensive 2D feature maps. The 2D convolutional network uses down-sampling encoders, up-sampling decoders, bottleneck modules, and skip connections to accomplish the segmentation task.
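As a concrete illustration, the following is a minimal PyTorch sketch of such a spatio-temporal design: a 3D convolutional input layer fuses a short frame sequence into 2D feature maps, which then pass through a small 2D encoder-decoder with a skip connection. The number of input frames, channel widths, and layer counts are illustrative assumptions, not the configuration reported in the article.

import torch
import torch.nn as nn

class SpatioTemporalSegNet(nn.Module):
    """Illustrative sketch: a 3D convolutional input layer fuses a short
    angiographic frame sequence into 2D feature maps, followed by a small
    2D encoder-decoder with a skip connection (layer sizes are assumptions)."""

    def __init__(self, num_frames: int = 4):
        super().__init__()
        # 3D input layer: convolve over (time, height, width) and collapse
        # the temporal axis so the rest of the network stays 2D.
        self.fuse3d = nn.Conv3d(1, 32, kernel_size=(num_frames, 3, 3),
                                padding=(0, 1, 1))
        # 2D encoder (down-sampling), bottleneck, and decoder (up-sampling).
        self.enc = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.bottleneck = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.out = nn.Conv2d(64, 1, 1)

    def forward(self, x):                           # x: (B, 1, T, H, W)
        f = torch.relu(self.fuse3d(x)).squeeze(2)   # -> (B, 32, H, W)
        e = self.enc(f)                             # -> (B, 64, H/2, W/2)
        d = self.up(self.bottleneck(e))             # -> (B, 32, H, W)
        d = torch.cat([d, f], dim=1)                # skip connection
        return torch.sigmoid(self.out(d))           # vessel probability mask

# Usage: a batch of 2 sequences of 4 frames at 256x256 resolution.
model = SpatioTemporalSegNet(num_frames=4)
mask = model(torch.randn(2, 1, 4, 256, 256))        # -> (2, 1, 256, 256)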
Results
The spatio-temporal model of this article obtains good segmentation results despite the poor quality of coronary angiographic video sequences, and outperforms state-of-the-art techniques.
Conclusions
The results demonstrate that making full use of the spatial and temporal information in image sequences promotes the analysis and understanding of images in videos.
Background
Coronary heart disease is one of the diseases with the highest mortality rates. Because the prevention and diagnosis of cardiovascular disease occupy an important position in medicine, the segmentation of cardiovascular images has gradually become a research hotspot. The goal of our research is to segment blood vessels accurately from coronary angiography videos to assist doctors in making accurate analyses.
Method
Based on the U-net architecture, we use a context-based convolutional network to capture more vessel information from the video. The proposed method comprises three modules: a sequence encoder module, a sequence decoder module, and a sequence filter module. High-level feature information is extracted in the encoder module. Multi-kernel pooling layers suited to the extraction of blood vessels are added before the decoder module. In the filter module, we add a simple temporal filter to reduce inter-frame flicker.
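The sketch below illustrates, under stated assumptions, what a multi-kernel pooling block and a simple temporal filter could look like in PyTorch. The kernel sizes, channel reduction, and the exponential-smoothing factor are illustrative choices, not the article's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiKernelPooling(nn.Module):
    """Illustrative sketch of multi-kernel pooling: pool the encoder output
    at several window sizes, upsample, and concatenate with the input
    (kernel sizes are assumptions, not taken from the article)."""

    def __init__(self, channels: int, kernel_sizes=(2, 3, 5, 6)):
        super().__init__()
        self.kernel_sizes = kernel_sizes
        # 1x1 convolutions compress each pooled branch before concatenation.
        self.reduce = nn.ModuleList(
            [nn.Conv2d(channels, channels // len(kernel_sizes), 1)
             for _ in kernel_sizes])

    def forward(self, x):
        h, w = x.shape[2:]
        branches = [x]
        for k, conv in zip(self.kernel_sizes, self.reduce):
            pooled = F.max_pool2d(x, kernel_size=k, stride=k)
            branches.append(F.interpolate(conv(pooled), size=(h, w),
                                          mode='bilinear', align_corners=False))
        return torch.cat(branches, dim=1)

def temporal_filter(masks, alpha=0.7):
    """Simple exponential moving average over per-frame masks to reduce
    inter-frame flicker (the smoothing factor alpha is an assumption)."""
    smoothed, prev = [], masks[0]
    for m in masks:
        prev = alpha * m + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

# Usage: a 64-channel feature map at 128x128 resolution, then a mask sequence.
feats = torch.randn(1, 64, 128, 128)
pooled = MultiKernelPooling(64)(feats)                    # -> (1, 128, 128, 128)
smooth = temporal_filter([torch.rand(1, 1, 128, 128) for _ in range(8)])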
Results
A performance comparison with other methods shows that our work achieves a sensitivity (Sen) of 0.8739 and an accuracy (Acc) of 0.9895. These results indicate that the accuracy of our method is significantly improved. The performance benefits from both the algorithm architecture and our enlarged dataset.
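For reference, Sen and Acc are assumed here to be the standard pixel-wise sensitivity and accuracy computed from the confusion matrix; a minimal NumPy sketch:

import numpy as np

def sensitivity_accuracy(pred, gt):
    """Pixel-wise sensitivity (Sen) and accuracy (Acc) for binary vessel masks,
    assuming the standard confusion-matrix definitions."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    sen = tp / (tp + fn)                    # true positive rate
    acc = (tp + tn) / (tp + tn + fp + fn)   # overall pixel accuracy
    return sen, acc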
Conclusion
Compared with previous methods that focus only on single-image analysis, our method obtains more coronary information from image sequences. In future work, we will extend the network to a 3D network.