2021
DOI: 10.1109/access.2021.3094052
Foreground-Aware Stylization and Consensus Pseudo-Labeling for Domain Adaptation of First-Person Hand Segmentation

Abstract: Hand segmentation is a crucial task in first-person vision. Since first-person images exhibit strong bias in appearance among different environments, adapting a pre-trained segmentation model to a new domain is required in hand segmentation. Here, we focus on appearance gaps for hand regions and backgrounds separately. We propose (i) foreground-aware image stylization and (ii) consensus pseudo-labeling for domain adaptation of hand segmentation. We stylize source images independently for the foreground and back…
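The consensus pseudo-labeling idea in the abstract — keeping only pixels where multiple predictions agree — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the two probability maps (e.g. predictions on differently stylized views of the same target image), and the confidence threshold are all assumptions for the example.

```python
# Hedged sketch of consensus pseudo-labeling for binary hand segmentation.
# A pixel is pseudo-labeled only when both predictions agree confidently;
# disagreeing or low-confidence pixels are marked `ignore` and excluded
# from the adaptation loss. Threshold and label values are illustrative.
import numpy as np

def consensus_pseudo_labels(prob_a, prob_b, thresh=0.8, ignore=255):
    """Return a map of 1 (hand), 0 (background), or `ignore` per pixel."""
    hand_a, hand_b = prob_a >= thresh, prob_b >= thresh
    bg_a, bg_b = prob_a <= 1.0 - thresh, prob_b <= 1.0 - thresh
    labels = np.full(prob_a.shape, ignore, dtype=np.uint8)
    labels[hand_a & hand_b] = 1  # both confidently predict hand
    labels[bg_a & bg_b] = 0      # both confidently predict background
    return labels

# Usage on two tiny 2x2 probability maps:
pa = np.array([[0.90, 0.10], [0.50, 0.95]])
pb = np.array([[0.85, 0.05], [0.90, 0.90]])
print(consensus_pseudo_labels(pa, pb))
```

The `ignore` value mirrors the common semantic-segmentation convention of an ignore index so that unreliable pixels contribute no gradient during self-training.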

Cited by 9 publications (4 citation statements)
References 52 publications
“…As shown in Fig. 3, generating data using synthetic models [8,23,37,39,65] is cost-effective, but it creates unrealistic hand texture [41]. Although hand-marker-based annotation [15,55,62] can automatically track the 6-DoF of sensor-attached hand joints, the sensors distort the hand appearance and hinder natural hand movement.…”
Section: Challenges In Dataset Construction
confidence: 99%
“…Ohkawa et al. proposed foreground-aware image stylization to convert the simulated texture in the ObMan data to a more realistic one while separating the hand regions and backgrounds [41]. However, the ObMan data provide static hand images with hand-held objects but without hand motion.…”
Section: Synthetic-Model-Based Annotation
confidence: 99%
“…For example, Tokunaga et al. [14] utilized pseudo-labeling and class proportion to realize semantic segmentation. Ohkawa et al. [16] proposed consensus pseudo-labeling for segmenting the hand image. Zou et al. [17] generated structured pseudo-labels for semantic segmentation.…”
Section: Introduction
confidence: 99%