Year: 2022
DOI: 10.3390/s22218520

CMANet: Cross-Modality Attention Network for Indoor-Scene Semantic Segmentation

Abstract: Indoor-scene semantic segmentation is of great significance to indoor navigation, high-precision map creation, route planning, etc. However, incorporating RGB and HHA images for indoor-scene semantic segmentation is a promising yet challenging task, due to the diversity of textures and structures and the disparity of multi-modality in physical significance. In this paper, we propose a Cross-Modality Attention Network (CMANet) that facilitates the extraction of both RGB and HHA features and enhances the cross-m…
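To make the idea of cross-modality attention concrete, the sketch below shows one plausible way to fuse RGB and HHA feature maps with channel attention in PyTorch. It is an illustrative assumption only: the class CrossModalityAttentionFusion, its gating layers, and the additive fusion are hypothetical and do not reproduce the actual CMANet module described in the paper.

# Illustrative sketch, not the CMANet architecture: each modality's feature
# map is reweighted by channel attention derived from the other modality,
# then the two are fused by element-wise addition.
import torch
import torch.nn as nn

class CrossModalityAttentionFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Gate for RGB features, conditioned on HHA statistics (and vice versa).
        self.rgb_gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.hha_gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, rgb_feat: torch.Tensor, hha_feat: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = rgb_feat.shape
        rgb_stats = self.pool(rgb_feat).view(b, c)   # global context of RGB branch
        hha_stats = self.pool(hha_feat).view(b, c)   # global context of HHA branch
        # Cross gating: each modality is modulated by the other's channel statistics.
        rgb_out = rgb_feat * self.rgb_gate(hha_stats).view(b, c, 1, 1)
        hha_out = hha_feat * self.hha_gate(rgb_stats).view(b, c, 1, 1)
        return rgb_out + hha_out  # fused cross-modality feature map

# Usage: fuse 256-channel feature maps from parallel RGB and HHA encoders.
# fusion = CrossModalityAttentionFusion(channels=256)
# fused = fusion(torch.randn(2, 256, 60, 80), torch.randn(2, 256, 60, 80))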

Cited by 18 publications (1 citation statement)
References: 50 publications
“…Semantic segmentation is mainly based on two elements of information, which are semantic information and spatial relationship features [38]. However, due to the different network structures of each model, it is impossible to obtain the same amount of information when obtaining both kinds of information.…”
Section: Results (citation type: mentioning)
Confidence: 99%