2023
DOI: 10.3390/cancers15051538
MVI-TR: A Transformer-Based Deep Learning Model with Contrast-Enhanced CT for Preoperative Prediction of Microvascular Invasion in Hepatocellular Carcinoma

Abstract: In this study, we considered preoperative prediction of microvascular invasion (MVI) status with deep learning (DL) models for patients with early-stage hepatocellular carcinoma (HCC) (tumor size ≤ 5 cm). Two types of DL models based only on the venous phase (VP) of contrast-enhanced computed tomography (CECT) were constructed and validated. From our hospital (First Affiliated Hospital of Zhejiang University, Zhejiang, P.R. China), 559 patients with histopathologically confirmed MVI status participated in this …

Cited by 11 publications (3 citation statements) · References 43 publications
“…MA-Net [36] and RAU-Net [37] incorporate attention modules within the bottleneck and skip connection, respectively, aiming to capture inter-dependencies between channel and spatial dimensions, thereby enhancing the delineation of ambiguous boundaries in liver tumor segmentation. In particular, the self-attention mechanism adapted from vision transformer (ViT) architectures [38], by enabling interactions between all spatial locations in the input image, can focus adaptively on organs and lesion regions by extracting informative features from across the entire 3D volume to capture irregular boundaries and complex morphology not confined to local regions [39][40][41]. Though scarce in number, recent pioneering works have kindled explorations into SSL for liver tumor segmentation, recognizing the potential of harnessing both scarce annotated and copious unannotated data.…”
Section: Abdominal Organs and Liver Tumor Segmentation
confidence: 99%
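As a rough illustration of the global interactions this statement describes, the following minimal PyTorch sketch (not code from the cited works; the volume size, patch size, and embedding width are illustrative assumptions) embeds a 3D volume into patch tokens and applies ViT-style self-attention, so that every spatial token interacts with every other token rather than only a local neighborhood.

```python
# Minimal sketch of ViT-style self-attention over a 3D volume.
# All shapes and hyperparameters below are illustrative, not the cited models'.
import torch
import torch.nn as nn

B, C, D, H, W = 1, 1, 32, 64, 64      # batch, channels, depth, height, width
p = 8                                 # cubic patch size (hypothetical)
embed_dim, num_heads = 96, 4

# Patch embedding: a strided 3D conv turns the volume into a token sequence.
patch_embed = nn.Conv3d(C, embed_dim, kernel_size=p, stride=p)
attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

volume = torch.randn(B, C, D, H, W)
tokens = patch_embed(volume).flatten(2).transpose(1, 2)   # (B, N, embed_dim)

# Self-attention: each of the N spatial tokens attends to all N tokens,
# so features are aggregated from across the entire volume.
out, weights = attn(tokens, tokens, tokens)
print(out.shape, weights.shape)       # (1, 256, 96), (1, 256, 256)
```

The (N, N) attention-weight matrix is what makes the interaction global: every token's output is a weighted mixture over all spatial locations, which is how such models can follow irregular lesion boundaries that are not confined to a local window.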
“…Inspired by the success of attention mechanisms, the transformer model was proposed as a complete shift from the sequential processing nature of recurrent neural networks (RNNs) and their variants [19][20][21][22]. The transformer model leverages attention mechanisms to process the input data in parallel, allowing for faster and more efficient computations.…”
Section: Introduction
confidence: 99%
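The contrast this statement draws, sequential recurrence versus parallel attention, can be seen in a short sketch. This is a generic illustration under assumed toy dimensions, not the architecture of any cited paper: the RNN must step through the sequence one position at a time, while attention compares all positions in a single batched operation.

```python
# Sketch: sequential RNN processing vs. parallel attention.
# Dimensions are illustrative toy values.
import torch
import torch.nn as nn

B, T, d = 2, 50, 64                   # batch, sequence length, feature dim
x = torch.randn(B, T, d)

# RNN: hidden state at step t depends on step t-1, forcing a sequential loop.
rnn = nn.RNN(d, d, batch_first=True)
h = torch.zeros(1, B, d)
outputs = []
for t in range(T):                    # inherently one step at a time
    o, h = rnn(x[:, t:t+1, :], h)
    outputs.append(o)

# Attention: all T positions are compared with all others in one matrix
# product, so the whole sequence is processed in parallel.
attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
out, _ = attn(x, x, x)                # (B, T, d), computed in one pass
```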
“…Additionally, the article outlines the future research prospects of CRISPR/Cas9 in glioma treatment while highlighting potential opportunities and challenges in this domain [9]. In a recent study conducted by Cao et al., a pioneering transformer-based end-to-end deep learning model, MVI-TR, was introduced, demonstrating substantial preoperative predictive utility for early-stage hepatocellular carcinoma patients [10]. Furthermore, an examination of recent research elucidating the involvement of WT1-associated protein (WTAP) in oncogenesis and its potential therapeutic implications was undertaken [11].…”
confidence: 99%