2021
DOI: 10.1007/s00371-021-02092-8
Attention Unet++ for lightweight depth estimation from sparse depth samples and a single RGB image

Cited by 13 publications (3 citation statements)
References 41 publications
“…In a similar fashion, MAPUNet [58], inspired by UNet++ [66] and UNet 3+ [19], exploited multi-scale feature fusion and supervision for monocular depth estimation. Moreover, a UNet++ variant with residual blocks and dense gated-convolution-based attention [60] was used for monocular depth estimation from sparse depth measurements [64]. Finally, NasUNet [56] employed neural architecture search to find an efficient and effective UNet architecture, a finding shared by [48] as well.…”
Section: Related Work (mentioning)
confidence: 99%
“…In recent years, the UNet++ neural network was proposed and has since been adopted by many scholars [23][24][25][26]. The UNet++ model builds on the UNet model, forming the UNet++ network through nested skip connections.…”
Section: Introduction (mentioning)
confidence: 99%
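The excerpt above notes that UNet++ derives its architecture from UNet via nested skip connections. As a rough, self-contained sketch (an illustration of the published UNet++ topology, not the cited paper's implementation; `unetpp_inputs` is a hypothetical helper name), the wiring of each nested node can be written as:

```python
# Sketch of the UNet++ nested skip topology. Node X(i, j) sits at encoder
# depth i and skip-pathway column j; it fuses all earlier same-depth nodes
# X(i, 0..j-1) (dense skips) plus the upsampled deeper node X(i+1, j-1).
# Plain UNet corresponds to keeping only the j == 0 backbone and the final
# decoder column; the intermediate columns are what "nested" adds.

def unetpp_inputs(i: int, j: int):
    """Return the list of node coordinates feeding node X(i, j)."""
    if j == 0:
        # Backbone encoder node: fed by downsampling X(i-1, 0), if any.
        return [(i - 1, 0)] if i > 0 else []
    dense_skips = [(i, k) for k in range(j)]   # same-depth dense skips
    upsampled = [(i + 1, j - 1)]               # upsampled from one level deeper
    return dense_skips + upsampled
```

For example, `unetpp_inputs(0, 2)` yields the two dense skips `(0, 0)`, `(0, 1)` plus the upsampled node `(1, 1)`, which is exactly the aggregation rule that densifies the skip pathways relative to plain UNet.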
“…To overcome this limitation, lightweight networks for depth completion were proposed. Tao et al. introduced lightweight depth completion with a Sobel edge prediction network [11] and with self-attention-based multi-level feature integration and extraction [12]. Although these approaches reduce computational cost by shrinking parameter size and model complexity, they do not match or surpass the accuracy of existing networks.…”
Section: Introduction (mentioning)
confidence: 99%