2022
DOI: 10.3389/fncom.2022.1001803
A computational classification method of breast cancer images using the VGGNet model

Abstract: Cancer is one of the most prevalent diseases worldwide. Breast cancer, in which aberrant cells grow out of control, is the most prevalent cancer in women. Breast cancer detection and classification are exceedingly difficult tasks. As a result, several computational techniques, including k-nearest neighbor (KNN), support vector machine (SVM), multilayer perceptron (MLP), decision tree (DT), and genetic algorithms, have been applied for the diagnosis and classification of breast ca…

Cited by 4 publications (1 citation statement). References 26 publications.
“…These methods are easy to understand but often suffer efficiency problems due to 3D operations. Motivated by the superior performance of deep learning technology on detection or segmentation tasks (Cheon et al., 2022; Huang et al., 2022; Khan et al., 2022), image-based deep models have become popular for grasp detection (Chu et al., 2018; Zhang et al., 2019; Dong et al., 2021; Yu et al., 2022a). These methods often use a rectangle representation g = (x, y, h, w, θ), where (x, y) is the center pixel location of a grasp candidate, (h, w) are height and width of the gripper, and θ is the rotation of the gripper.…”
Section: Introduction
Confidence: 99%
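The rectangle grasp representation quoted above can be sketched as a small Python data structure. This is an illustrative sketch only (the class and method names are hypothetical, not from the cited papers); it encodes g = (x, y, h, w, θ) and derives the four rotated corner points, a common step before drawing or scoring a grasp candidate:

```python
import math
from dataclasses import dataclass

@dataclass
class GraspRectangle:
    """Rectangle grasp g = (x, y, h, w, theta), as described in the quote."""
    x: float      # center pixel column of the grasp candidate
    y: float      # center pixel row of the grasp candidate
    h: float      # gripper height in pixels
    w: float      # gripper width (opening) in pixels
    theta: float  # gripper rotation in radians

    def corners(self):
        """Return the four corners of the rotated rectangle, counter-clockwise."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        offsets = [(-self.w / 2, -self.h / 2), (self.w / 2, -self.h / 2),
                   (self.w / 2, self.h / 2), (-self.w / 2, self.h / 2)]
        # Rotate each local offset by theta, then translate to the center (x, y).
        return [(self.x + dx * c - dy * s, self.y + dx * s + dy * c)
                for dx, dy in offsets]
```

For example, an axis-aligned grasp (θ = 0) centered at the origin with w = 4 and h = 2 yields corners at (±2, ±1).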