2022
DOI: 10.1016/j.knosys.2022.108824
MIA-Net: Multi-information aggregation network combining transformers and convolutional feature learning for polyp segmentation

Cited by 26 publications (9 citation statements)
References 10 publications
“…Furthermore, Pan et al. [24] utilized a hierarchical Transformer as an encoder to extract more potent multiscale features. Li et al. [25] proposed a multi-information aggregation network model called MIA-Net, which combines Transformer and convolutional feature learning. In 2024, Liu et al. [26] proposed a new vision transformer model, introducing an attention mechanism and a pyramid structure to improve segmentation results.…”
Section: Related Work
confidence: 99%
“…Recently, Transformer-based approaches have become popular [30]. A common strategy is to combine a Transformer and a CNN for robust feature exploration [8,53]. For example, MIA-Net [8] uses both to capture global dependencies and low-level spatial details. Some studies [4,6,7,9,54] adopt pure Transformers for feature abstraction.…”
Section: Polyp Segmentation in Images
confidence: 99%