2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01447

UDA-COPE: Unsupervised Domain Adaptation for Category-level Object Pose Estimation

Abstract: Test-time adaptation methods have been gaining attention recently as a practical solution for addressing source-to-target domain gaps by gradually updating the model without requiring labels on the target data. In this paper, we propose a method of test-time adaptation for category-level object pose estimation called TTA-COPE. We design a pose ensemble approach with a self-training loss using pose-aware confidence. Unlike previous unsupervised domain adaptation methods for category-level object pose estimation, …
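The abstract describes test-time adaptation via confidence-gated self-training: the model is gradually updated on unlabeled target data using its own (ensembled) pose predictions as pseudo-labels, with a pose-aware confidence deciding which pseudo-labels to trust. The sketch below illustrates that general recipe only; the network, the teacher/student pair, and the agreement-based confidence are hypothetical stand-ins, not the actual TTA-COPE components.

```python
import torch
import torch.nn as nn

class TinyPoseNet(nn.Module):
    """Hypothetical stand-in for a pose regressor (quaternion + translation)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 7))

    def forward(self, pts):                       # pts: (B, N, 3) object points
        return self.backbone(pts).mean(dim=1)     # (B, 7) pooled pose parameters

student = TinyPoseNet()
teacher = TinyPoseNet()
teacher.load_state_dict(student.state_dict())    # teacher starts as a copy of the student
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def tta_step(pts, conf_threshold=0.5, ema=0.999):
    """One unlabeled test-time update: pseudo-label, gate by confidence, self-train."""
    with torch.no_grad():
        pseudo = teacher(pts)                                     # teacher pseudo-labels
        conf = torch.exp(-(student(pts) - pseudo).norm(dim=1))    # toy agreement-based confidence
    keep = conf > conf_threshold                                  # keep only confident samples
    if keep.any():
        loss = ((student(pts[keep]) - pseudo[keep]) ** 2).mean()  # self-training loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():                                         # slow EMA update of the teacher
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(ema).add_(p_s, alpha=1 - ema)

tta_step(torch.randn(8, 256, 3))                                  # one step on a dummy target batch
```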

Cited by 30 publications (12 citation statements) · References 54 publications

Citation statements (ordered by relevance):
“…To address this issue, prior-based methods [21,3,30,7,17,10] leverage category-specific 3D priors (templates) to guide pose estimation. They adopt a prior-driven deformation module [30] to deform the prior for synthesizing the target object in world-space.…”
Section: Prior-based Methods
Mentioning confidence: 99%
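The statement above describes prior-based methods that deform a category-level shape prior to synthesize the target object. Below is a minimal sketch of that idea in the style of shape-prior deformation [30]: canonical (NOCS-like) coordinates are obtained by applying a predicted per-point deformation to the prior and a soft assignment from observed points to prior points. The shapes, names, and random stand-ins for network predictions are assumptions for illustration.

```python
import numpy as np

def reconstruct_canonical(prior, deform_field, assignment):
    """prior:        (Nc, 3) category-level mean shape (the prior/template)
    deform_field: (Nc, 3) predicted per-point deformation of the prior
    assignment:   (No, Nc) row-stochastic matrix matching each observed
                  point to prior points (e.g. a softmax over Nc)
    returns:      deformed instance shape and canonical coordinates of the observed points"""
    deformed_prior = prior + deform_field           # instance-specific shape
    canonical_pts = assignment @ deformed_prior     # soft correspondence lookup, (No, 3)
    return deformed_prior, canonical_pts

# Toy usage with random stand-ins for network predictions.
rng = np.random.default_rng(0)
prior = rng.uniform(-0.5, 0.5, size=(1024, 3))
deform = 0.05 * rng.standard_normal((1024, 3))
logits = rng.standard_normal((2048, 1024))
assign = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
shape, nocs = reconstruct_canonical(prior, deform, assign)
```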
“…Other methods such as [29,30,31,32,33] predict pose and bounding box without reconstructing the full shape of the object. For the evaluation presented in this work, we limit ourselves to methods that perform both reconstruction and pose estimation, although our evaluation protocol could in principle be used for pure pose estimation methods as well.…”
Section: Related Work
Mentioning confidence: 99%
“…Only methods that estimate both pose and shape and do not require an initial pose estimate are included. Therefore, pure categorical pose estimation methods such as [3,29,33,58] are excluded; tracking methods that require an initial pose estimate such as [59,60,61] are excluded; and finally, methods that explicitly rely on objects being upright such as [34,2,62] are excluded.…”
Section: Related Work
Mentioning confidence: 99%
“…[2,46] gain the ability to generalize by mapping the input shape to normalized or metric-scale canonical spaces and then recovering the objects' poses via correspondence matching. Better handling of intra-category shape variation is also achieved by leveraging shape priors [4,42,56], symmetry priors [20], or domain adaptation [17,21]. Additionally, [5] enhances the perceptiveness of local geometry, and [7,55] exploit geometric consistency terms to improve the performance further.…”
Section: Introduction
Mentioning confidence: 99%
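The statement above mentions recovering object poses via correspondence matching between observed points and their canonical-space counterparts. A standard way to turn such correspondences into a pose is the Umeyama similarity alignment (rotation, translation, and scale); the sketch below is a generic implementation of that step, not necessarily the exact procedure used in the cited works.

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Estimate a similarity transform (s, R, t) with dst ≈ s * R @ src_i + t.
    src: (N, 3) canonical-space points; dst: (N, 3) corresponding observed points."""
    n = src.shape[0]
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / n                         # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:      # correct for reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / n
    s = np.trace(np.diag(D) @ S) / var_src            # scale
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Quick check with a synthetic scaled rigid transform.
rng = np.random.default_rng(0)
src = rng.standard_normal((500, 3))
angle = np.pi / 5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = 2.0 * src @ R_true.T + np.array([0.1, -0.3, 0.5])
s, R, t = umeyama_alignment(src, dst)   # recovers s ≈ 2.0, R ≈ R_true, t ≈ (0.1, -0.3, 0.5)
```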
“…Despite the remarkable progress of existing methods, there is still room for improvement in the performance of category-level object pose estimation. Reconstruction and matching-based methods [17,42,46] are usually limited in speed due to the time-consuming correspondence-matching procedure. Recently, various methods [5,7,20,55,56] built on 3D graph convolution (3D-GC) [23] have achieved impressive performance and run in real-time.…”
Section: Introduction
Mentioning confidence: 99%
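The last statement refers to methods built on 3D graph convolution (3D-GC) [23] that operate directly on point clouds in real time. As a rough illustration of what a point-graph convolution does, the sketch below aggregates features over k-nearest neighbors in 3D; it is a generic kNN graph convolution, not the specific 3D-GC operator of [23], and the shapes and weight layout are assumptions.

```python
import numpy as np

def knn_graph_conv(xyz, feats, weights, k=16):
    """xyz:     (N, 3) point coordinates
    feats:   (N, Cin) per-point features
    weights: (2*Cin, Cout) weights applied to [center feature, neighbor mean]
    returns: (N, Cout) aggregated features"""
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    idx = np.argsort(d2, axis=1)[:, :k]                       # (N, k) nearest neighbors (incl. self)
    neighbor_mean = feats[idx].mean(axis=1)                   # (N, Cin) aggregated neighbor features
    fused = np.concatenate([feats, neighbor_mean], axis=1)    # (N, 2*Cin)
    return np.maximum(fused @ weights, 0.0)                   # linear map + ReLU

# Toy usage on a random point cloud.
rng = np.random.default_rng(0)
xyz = rng.standard_normal((1024, 3))
feats = rng.standard_normal((1024, 32))
W = 0.1 * rng.standard_normal((64, 128))
out = knn_graph_conv(xyz, feats, W)   # (1024, 128)
```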