MeT: A graph transformer for semantic segmentation of 3D meshes
2023
DOI: 10.1016/j.cviu.2023.103773

Cited by 4 publications (2 citation statements)
References 60 publications
“…Transformers, originally stemming from NLP tasks [8], utilize a self-attention (SA) mechanism to capture dependencies among elements. Owing to their strong sequence-modeling capability, hybrid transformers have been successfully applied to many vision tasks [9,10,13], including image classification, object detection, and semantic segmentation. However, few works [7,14] have applied them to action segmentation, as transformers are limited by their huge computational costs.…”
Section: Study On Efficient Transformers
confidence: 99%
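The self-attention mechanism the statement above refers to can be sketched in a few lines. The following is a minimal NumPy version of scaled dot-product self-attention with random, untrained projection matrices (all variable names are illustrative, not from the cited works); it shows how each output element aggregates information from every input element, which is also why the cost grows quadratically with sequence length:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Minimal scaled dot-product self-attention.

    x: (n, d) array of n sequence elements with feature dimension d.
    w_q, w_k, w_v: (d, d) projection matrices (learned in practice).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])       # (n, n) pairwise dependency scores
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                            # each output mixes all inputs

rng = np.random.default_rng(0)
n, d = 4, 8
x = rng.normal(size=(n, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8): same shape as the input, globally mixed
```

The (n, n) score matrix is what makes plain transformers expensive on long sequences such as untrimmed action videos, motivating the efficient variants the cited section surveys.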
“…Transformers, originally stemming from natural language processing (NLP) tasks [8], have obtained state-of-the-art performance on many vision tasks, including image classification [9], object detection [10-12], and semantic segmentation [13]. ASFormer [7] is the first transformer architecture for action segmentation; it explicitly introduces a local-connectivity inductive bias and hierarchical representation to rebuild the transformer, obtaining impressive improvements, with its self-attention mechanism playing a major role in the performance gain.…”
Section: Introduction
confidence: 99%