Human Attribute Segmentation (HAS) assigns a semantic label, pixelwise, to the different parts of people in an image. This fine-grained description is useful for several applications (e.g., security, fashion). However, despite the strong performance of supervised Semantic Segmentation (SS) approaches, they are usually biased toward the source training dataset and suffer a performance drop when applied to new domains. Annotating images pixelwise for each newly encountered context is tedious and expensive. So how can HAS become more robust to new contexts without new annotations? In this first study of Unsupervised Domain Adaptation (UDA) for HAS, we present UDA-HPTR, a new method that combines HPTR [1] (Human Parsing with TRansformers) with self-supervised and semi-supervised learning paradigms to address UDA. UDA-HPTR improves performance on both the source (labeled) and target (unlabeled) datasets compared to the fully supervised version (HPTR). When applied to HAS, it also outperforms HRDA, a state-of-the-art UDA method on autonomous driving benchmarks, by +6.7 points on the source and +8.8 points on the target, while using only half the number of parameters.
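To make the self-/semi-supervised UDA idea mentioned above concrete, here is a minimal sketch of one common instantiation: a mean-teacher self-training loop that mixes a supervised loss on the labeled source domain with a pseudo-label loss on the unlabeled target domain. This is an illustration under standard assumptions, not the authors' UDA-HPTR implementation; all names (`make_teacher`, `uda_step`, the confidence threshold) are hypothetical.

```python
# Minimal sketch of mean-teacher self-training for UDA segmentation.
# NOT the paper's UDA-HPTR code; a generic illustration of the paradigm.
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    """Teacher starts as a frozen copy of the student, updated only via EMA."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

def ema_update(teacher, student, alpha=0.999):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

def uda_step(student, teacher, optimizer, src_imgs, src_labels, tgt_imgs,
             conf_thresh=0.9):
    # Supervised loss on labeled source images.
    src_logits = student(src_imgs)                       # (B, C, H, W)
    loss_src = F.cross_entropy(src_logits, src_labels)

    # Teacher produces pseudo-labels on unlabeled target images.
    with torch.no_grad():
        tgt_probs = torch.softmax(teacher(tgt_imgs), dim=1)
        conf, pseudo = tgt_probs.max(dim=1)              # (B, H, W)

    # Self-training loss: only confident target pixels contribute.
    tgt_logits = student(tgt_imgs)
    loss_tgt = F.cross_entropy(tgt_logits, pseudo, reduction="none")
    mask = (conf >= conf_thresh).float()
    loss_tgt = (loss_tgt * mask).sum() / mask.sum().clamp(min=1.0)

    loss = loss_src + loss_tgt
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```

In this sketch the teacher supplies increasingly reliable pseudo-labels as training progresses, while the confidence mask limits error propagation from wrong pseudo-labels; the actual losses, thresholding, and augmentation strategy in UDA-HPTR may differ.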