2019
DOI: 10.48550/arxiv.1909.00948
Preprint
HarDNet: A Low Memory Traffic Network

Cited by 3 publications (2 citation statements)
References 22 publications
“…In this section, we compare our proposed model with existing gastrointestinal polyp segmentation methods, including EnhancedUNet [65], PraNet [19], SANet [56], HarDNet [66], ACSNet [67], DBHNet [20] and CFANet [22]. We also compare BFE-Net against state-of-the-art medical image segmentation methods: U-Net [16], U-Net++ [17], DeepLabV3 [68], ResUNet [18], AttUNet [69].…”
Section: Results
confidence: 99%
“…The networks in this paper are implemented with the Python-based deep learning framework PyTorch 1.11, using an NVIDIA A40 GPU with 48 GB of memory. To ensure fairness in the method comparison, all network hyperparameters are set uniformly: the number of training epochs is 100, the learning rate is 0.01, the batch size is 16, and the loss function is cross-entropy loss. The model comparisons include BiSeNetv2 [43], ConvNeXt [44], DANet [45], DDRNet [46], DeepLabV3+ [47], UNet [48], FCHarDNet [49], GCN [50], NestedUNet [51], PSPNet [52], SFNet [53], SegFormer [54], and Swin Transformer [55]. These state-of-the-art models have been shown to perform well in traditional semantic-segmentation downstream tasks.…”
Section: Implementation Details
confidence: 99%
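
The uniform training setup quoted above maps onto a few lines of PyTorch. Below is a minimal sketch based only on the stated hyperparameters (100 epochs, learning rate 0.01, batch size 16, cross-entropy loss); the 1x1-convolution model and random tensors are placeholders, not part of the citing paper, and in practice the model would be one of the compared networks such as FCHarDNet.

```python
# Minimal PyTorch sketch of the uniform training setup described in the quote:
# 100 epochs, learning rate 0.01, batch size 16, cross-entropy loss.
# The 1x1-conv "model" and random tensors are placeholders so the sketch runs;
# they stand in for a real segmentation network and dataset.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

num_classes = 2
model = nn.Conv2d(3, num_classes, kernel_size=1)        # placeholder per-pixel classifier
images = torch.randn(64, 3, 64, 64)                     # placeholder RGB images
labels = torch.randint(0, num_classes, (64, 64, 64))    # placeholder per-pixel labels

train_loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()                       # spatial cross-entropy

for epoch in range(100):
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        logits = model(x)                # (N, C, H, W) class scores
        loss = criterion(logits, y)      # y: (N, H, W) integer class indices
        loss.backward()
        optimizer.step()
```

In this comparison setting, only the model line would change between runs; the optimizer, loss, and loop stay fixed so that the reported differences come from the architectures themselves.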