2020
DOI: 10.1088/1742-6596/1693/1/012183
Automatic segmentation in fetal ultrasound images based on improved U-net

Abstract: As an effective means of routine prenatal diagnosis, ultrasound (US) imaging is widely used in clinical practice. Biosignatures obtained from fetal segmentation contribute to monitoring fetal development and health. However, artifacts, speckle noise, the quality of imaging equipment and other factors make the segmentation of fetal US images extremely challenging. In this paper, aiming to increase the depth of the model while avoiding the vanishing and exploding gradient problems, we p…
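The abstract breaks off mid-sentence, but the citing statement further down confirms the paper pairs residual connections with ASPP to address the gradient problems. As a toy illustration of why an identity-skip (residual) formulation resists vanishing gradients — a self-contained sketch, not the paper's actual network — a deep stack of weak layers has a gradient that decays geometrically, while the residual version's chain-rule factors are (1 + f′) and stay on the order of 1:

```python
import math

def layer(x, w=0.5):
    """One weak nonlinear layer: returns f(x) and f'(x) for f(x) = tanh(w*x)."""
    y = math.tanh(w * x)
    return y, w * (1.0 - y * y)

def stack_gradient(x, depth, residual):
    """Gradient of a depth-layer stack at input x, accumulated by the chain rule."""
    grad = 1.0
    for _ in range(depth):
        y, dy = layer(x)
        if residual:
            grad *= 1.0 + dy  # x_{k+1} = x_k + f(x_k)  ->  factor (1 + f')
            x = x + y
        else:
            grad *= dy        # x_{k+1} = f(x_k)        ->  factor f'
            x = y
    return grad

g_plain = stack_gradient(0.3, depth=20, residual=False)
g_resid = stack_gradient(0.3, depth=20, residual=True)
print(f"plain    20-layer gradient: {g_plain:.2e}")  # shrinks geometrically toward zero
print(f"residual 20-layer gradient: {g_resid:.2e}")  # stays on the order of 1
```

Here each plain factor f′ is at most 0.5, so 20 layers multiply out to roughly 0.5²⁰, whereas the residual factors never drop below 1 — the same reason a Residual U-Net can be made deeper without the gradient dying.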

Cited by 6 publications (4 citation statements)
References 10 publications
“…Due to the large differences in tumor location and size across images, the network needs a sufficient receptive field and strong multi-scale spatial processing capability. However, unlike Atrous Spatial Pyramid Pooling (ASPP), U-Net cannot capture semantic information at as many scales, nor can it accurately segment the region near the target edge [17]. Therefore, this paper adds an FPN structure to U-Net to improve its ability to integrate multi-scale semantic information and to enrich the features used for per-pixel label classification.…”
Section: Methods (mentioning)
confidence: 99%
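The statement above contrasts U-Net with ASPP, which covers multiple scales by running parallel dilated 3×3 convolutions. The scale coverage follows from a one-line receptive-field formula; the rates 1/6/12/18 below follow the DeepLab convention and are an assumption here, not taken from the excerpt:

```python
def effective_kernel(k, dilation):
    """Effective spatial extent of a k-by-k convolution with the given dilation rate."""
    return k + (k - 1) * (dilation - 1)

# DeepLab-style ASPP branch rates (assumed; the excerpt does not list them)
for rate in (1, 6, 12, 18):
    e = effective_kernel(3, rate)
    print(f"rate {rate:2d}: sees a {e}x{e} window")  # 3x3, 13x13, 25x25, 37x37
```

At the parameter cost of a plain 3×3 kernel per branch, the dilated branches cover windows from 3×3 up to 37×37 — the multi-scale capability the citing statement attributes to ASPP.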
“…[118,119] Two studies focused on amniotic fluid, with one evaluating automatic measurement of the amniotic fluid index [120] and the other assessing segmentation of amniotic fluid and fetal tissue [121]. The remaining studies included a study assessing machine learning to determine occiput anterior versus occipito-posterior position in the second stage of labor [122], machine learning assessment of fetal-lung texture in pregnancies affected by gestational diabetes or preeclampsia compared to normal pregnancies [123], use of an AI classifier to recognize fetal facial expressions on 4D ultrasound [124], detection of the FASP [125], automatic detection of the fetal face on 3D ultrasound [126], assessment of fetal presentation and confirmation of fetal cardiac activity [127], segmentation of the fetal thoracic wall [128], classification of the umbilical cord into normocoiling, hypocoiling and hypercoiling [129], automated grading of hydronephrosis on ultrasound [130], segmentation of the fetal kidneys [131], segmentation of the AC and FL [132], assessment of an image reconstruction framework applied to the whole fetus [133], and automated detection of fetal standard planes [134,135]. A summary of results is reported in Table 8.…”
Section: Number Of Patients, Inclusion Criteria, Description Of Artific... (mentioning)
confidence: 99%
“…It has been stated that the proposed method has great potential to support physicians. Yang et al. (2020) used Residual U-Net and ASPP U-Net models for automatic segmentation of the biometric parameters AC, FL and CRL. Residual U-Net addressed the gradient problem, while ASPP U-Net increased segmentation accuracy without increasing the depth of the model.…”
Section: Deep Learning (mentioning)
confidence: 99%