A deep learning based approach for classification of abdominal organs using ultrasound images (2021)
DOI: 10.1016/j.bbe.2021.05.004

Cited by 10 publications (8 citation statements); references 28 publications.
“…The experiment was performed on a computer with an Intel CPU with a clock speed of 2.4 GHz and an NVIDIA 20-series GPU, using the PyTorch framework [33] and the CUDA toolkit (version 11.6). As in previous literature [26, 28, 29], publicly available neural networks pre-trained on the ImageNet challenge dataset [27] were used as the basis for transfer learning. The neural network architectures chosen for this experiment can be classified by the principles behind their design.…”
Section: Methods
confidence: 99%
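The cited methods describe transfer learning in PyTorch from networks pre-trained on ImageNet. As a minimal sketch of that kind of setup (not the authors' code), the following assumes a ResNet-50 backbone, an illustrative count of 6 abdominal cross-section classes, and an arbitrary Adam optimizer with learning rate 1e-4; the DataLoader over the ultrasound images is omitted.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # assumed number of abdominal cross-section classes

# Load a backbone pre-trained on the ImageNet challenge dataset
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Swap the 1000-class ImageNet head for one sized to the ultrasound classes
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameters

def train_step(images, labels):
    # One illustrative optimization step; batches would come from a
    # DataLoader over the ultrasound dataset (not shown here).
    model.train()
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Whether to freeze the earlier convolutional layers or fine-tune the whole network is a design choice the cited statements do not specify; this sketch fine-tunes all parameters.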
“…Xu et al. [28] examined classification of 11 ultrasound abdominal cross sections as part of a wider study on landmark detection; their single-task learning (STL) ResNet-50 attained an accuracy of 81.22%, compared with 78.87% for the radiologist. Reddy et al. [29] tested a number of neural networks on 6 visually distinct abdominal cross sections, achieving an accuracy of 98.77% using a ResNet-50.…”
Section: Introduction
confidence: 99%
“…These studies show reduced accuracy where cross sections overlap or have visual similarities. Where a distinct dataset that avoids these overlaps and visual similarities is used, accuracies between 95.7% and 98.6% can be achieved [7]. This further highlights the limitations of using an image-only approach for abdominal cross sections, due to the lack of distinctive landmarks where there are overlapping classes within the imagery.…”
Section: Introduction
confidence: 95%
“…(e) Transverse approach of the left kidney; (f) transverse approach of the right kidney. These cross sections were chosen specifically based on classification error in previous studies [3][4][5][7] and on visual similarity, such as between the left and right kidneys, and overlapping regions of interest (ROI), such as the gallbladder and common bile duct. Complex sweep scans of the aorta and portal veins, which contain both visual similarities and overlapping anatomical structures, were also chosen to add complexity to the classification task.…”
Section: Dataset
confidence: 99%
“…The above studies all use CNNs to study related diseases, but they suffer from problems such as a lack of context information and fine-grained features, the need for a large number of parameters and high computational cost, long computing times, and a loss of effective information due to small input data sizes. There were few studies on the classification of HCC and MHC using deep learning methods, and some studies used large deep networks to classify normal and abnormal livers [18, 19].…”
Section: Introduction
confidence: 99%