2023
DOI: 10.1007/s11042-023-15524-5

TC-SegNet: robust deep learning network for fully automatic two-chamber segmentation of two-dimensional echocardiography

Abstract: Heart chamber quantification is an essential clinical task for analyzing heart abnormalities by evaluating the heart volume estimated from the endocardial borders of the chambers. A precise heart chamber segmentation algorithm for echocardiography is essential for improving the diagnosis of cardiac disease. This paper proposes a robust two-chamber segmentation network (TC-SegNet) for echocardiography which follows a U-Net architecture and effectively incorporates the proposed modified skip connections, Atrous …

Cited by 3 publications (4 citation statements)
References 38 publications (24 reference statements)
“…This section examines the literature on efficient procedures and compares their performance to find out how they stack up against one another. For the cardiac image segmentation, BiSeNet [26], U-Net [30], FASTR-SCANN [32], U-Net-Transformer [33], OFHCSS [35] and U-Net-YOLOv7 [36] are considered for the evaluation. Then, CNN-ResNet [37], CNN-LSTM [41], Xception [45], 1D-CNN [46], CNN [47] and ShuffleNet [48] are taken for the cardiac view classification.…”
Section: Results (mentioning)
Confidence: 99%
“…4 shows the comparison of various DL-based segmentation models in terms of accuracy. It is observed that the U-Net [30] is 4.05%, 2.37%, 8.63%, 20.25% and 7.95% higher than BiSeNet, FASTR-SCANN, U-Net-Transformer, OFHCSS and U-Net-YOLOv7, respectively. These results indicate that the U-Net [30] has better segmentation accuracy than the other models as it integrates U-Net with MSC, ASPP and SEM.…”
Section: π΄π‘π‘π‘’π‘Ÿπ‘Žπ‘π‘¦ = π‘‡π‘Ÿπ‘’π‘’ π‘ƒπ‘œπ‘ π‘–π‘‘π‘–π‘£π‘’ (𝑇𝑃)+π‘‡π‘Ÿπ‘’π‘’ π‘π‘’π‘”π‘Žπ‘‘π‘–π‘£π‘’ (𝑇𝑁)mentioning
confidence: 94%
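The accuracy metric referenced in the citation above is the standard pixel accuracy for segmentation masks: correctly classified pixels (TP + TN) divided by all pixels. A minimal sketch of that computation (the function name and toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels classified correctly: (TP + TN) / total pixels."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.sum(pred & target)    # foreground predicted as foreground
    tn = np.sum(~pred & ~target)  # background predicted as background
    return (tp + tn) / pred.size

# Toy 4x4 masks: the prediction disagrees with the ground truth on 2 of 16 pixels.
target = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
print(pixel_accuracy(pred, target))  # 14/16 = 0.875
```

Note that for heavily class-imbalanced echo images (small chambers, large background), Dice or IoU is often reported alongside pixel accuracy, since a background-only prediction can still score high on accuracy alone.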