2021
DOI: 10.1109/access.2021.3076853
Depth Map Super-Resolution Using Guided Deformable Convolution

Abstract: Depth maps acquired by low-cost sensors have low spatial resolution, which restricts their usefulness in many image processing and computer vision tasks. To increase the spatial resolution of the depth map, most state-of-the-art deep-learning-based depth map super-resolution methods extract features from a high-resolution guidance image and concatenate them with the features from the depth map. However, such simple concatenation can transfer unnecessary textures, known as texture copying artifacts, of t…

Cited by 3 publications (1 citation statement)
References 36 publications
“…Cao et al. [20] propose a dual auto-encoder attention network (DAEANet) comprising two auto-encoder networks, where the guidance auto-encoder network (GAENet) and the target auto-encoder network (TAENet) extract feature information from the intensity image and the depth image, respectively. Kim et al. [21] propose a depth image super-resolution method using guided deformable convolution, which obtains 2D kernel offsets for the depth features from the guidance features to significantly alleviate texture copying artifacts in the resultant depth image. Guo et al. [22] propose a two-branch network for depth image super-resolution with a high-resolution guidance image, which can be viewed as a prior that guides the low-resolution depth image to restore the missing high-frequency details of structures.…”
Section: Introduction
confidence: 99%
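The excerpt above describes the core mechanism of guided deformable convolution: per-tap 2D offsets, predicted from the guidance features, shift where each kernel tap samples the depth features. A minimal NumPy sketch of that sampling step is shown below. It is an illustration only, not the authors' implementation: in the actual method the offsets come from a learned network operating on the guidance features, whereas here they are simply passed in as an argument, and all function names are hypothetical.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly sample a 2D array at a fractional location (y, x)."""
    h, w = img.shape
    # Clamp the sampling location to the image border.
    y = min(max(y, 0.0), h - 1.0)
    x = min(max(x, 0.0), w - 1.0)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def guided_deform_conv_at(depth, offsets, weights, py, px):
    """Deformable 3x3 convolution of `depth` at pixel (py, px).

    offsets: (9, 2) array of per-tap (dy, dx) displacements; in the paper's
             method these would be predicted from the guidance features.
    weights: (9,) kernel weights, one per tap.
    """
    taps = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    out = 0.0
    for k, (dy, dx) in enumerate(taps):
        sy = py + dy + offsets[k, 0]  # regular grid position + learned offset
        sx = px + dx + offsets[k, 1]
        out += weights[k] * bilinear_sample(depth, sy, sx)
    return out
```

With all offsets zero this reduces to an ordinary 3x3 convolution; nonzero offsets let the kernel sample along depth edges indicated by the guidance image instead of across them, which is how the method avoids copying guidance textures into the depth map.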