2021
DOI: 10.1007/s13042-020-01251-y
Attention-based context aggregation network for monocular depth estimation

Cited by 66 publications (33 citation statements)
References 40 publications
“…We will further explore the possibilities of deploying the proposed ideas in applications such as image manipulation and editing, image-to-image transfer, 3D view rendering, contrast enhancement, refocusing, object recognition, and realistic integration of virtual objects in augmented reality.

  Xu et al. [32]                          0.586
  DeepLabV3+ [33]                         0.575
  Multi-Task Light-Weight-RefineNet [34]  0.565
  RelativeDepth [35]                      0.538
  SC-SfMLearner-ResNet18 [36]             0.536
  SDC-Depth [37]                          0.497
  ACAN [38]                               0.496
  DORN [39]                               0.509
  SENet-154 [40]                          0.530
  Ours (3DBGES-UNet)                      0.1857…”
Section: Discussion
confidence: 99%
“…We choose to compare only against models that do not use extra training data. The comparison is performed with Zhu et al. [31] (SOM), Xu et al. [32], DeepLabV3+ [33], Multi-Task Light-Weight-RefineNet [34], RelativeDepth [35], SC-SfMLearner-ResNet18 [36], SDC-Depth [37], ACAN [38], DORN [39], and SENet-154 [40] depth estimation methods. The results in Table 1 and Table 2 are quite impressive and on par with state-of-the-art methods.…”
Section: Evaluation and Comparative Analysis
confidence: 99%
“…In recent years, deep convolutional networks have been applied to depth estimation and have achieved excellent results, e.g. [2]-[8], [10]-[24]. It is now generally considered that deep-learning-based depth estimation from a single image began with Eigen et al. [2].…”
Section: A. Monocular Depth Estimation
confidence: 99%
“…Laina et al. [7] proposed a network based on FCRN (Fully Convolutional Residual Networks) to predict depth maps. Yuru et al. [24] added an attention mechanism to a classification-based algorithm, combined it with contextual information, and used soft classification to improve the quality of the predicted depth. Wu et al. [23] applied ASPP (Atrous Spatial Pyramid Pooling) to depth estimation tasks.…”
Section: A. Monocular Depth Estimation
confidence: 99%
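The soft classification mentioned in the excerpt above predicts depth as a probability-weighted sum over discretized depth bins, rather than taking a hard argmax over classes. A minimal NumPy sketch of that inference step, with illustrative bin centers and logits (not values from the paper):

```python
import numpy as np

def soft_depth(logits, bin_centers):
    """Soft ordinal inference: depth = sum_i p_i * d_i, where
    p = softmax(logits) over the discretized depth bins."""
    z = logits - logits.max()           # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()     # per-bin probabilities
    return float(p @ bin_centers)       # expected (soft) depth

# Illustrative: 5 depth bins spanning 1 m to 9 m
bins = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
logits = np.array([0.1, 2.0, 3.0, 1.0, 0.2])
d = soft_depth(logits, bins)  # a smooth value inside [1, 9]
```

Because the output is a convex combination of the bin centers, predictions vary continuously instead of jumping between discrete depth levels, which is the quality improvement the excerpt refers to.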
“…To capture more context information, traditional CNN-based methods mainly enlarge the receptive fields of convolution operations, for example with large kernels, dilated convolution [9], ASPP [10], and DenseASPP [11]. Recently, benefiting from its strong capability to capture context information, the attention mechanism has been widely applied to monocular depth estimation [12, 13, 14]. Combining traditional CNN-based models with the attention mechanism greatly improves the ability to capture context information.…”
Section: Introduction
confidence: 99%
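The context aggregation described above lets every spatial position attend to all other positions, so context is gathered globally rather than from a fixed receptive field. A minimal NumPy sketch of scaled dot-product self-attention over flattened feature positions (single head, identity query/key/value projections for brevity; the actual networks learn these projections):

```python
import numpy as np

def self_attention(X):
    """Each of N positions aggregates context from all N positions.
    X: (N, C) flattened feature map (N = H*W positions, C channels)."""
    scores = X @ X.T / np.sqrt(X.shape[1])        # (N, N) pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)             # softmax: rows sum to 1
    return A @ X                                  # context-aggregated features

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))   # e.g. a 2x3 feature map with 4 channels
Y = self_attention(X)             # same shape, each row mixes global context
```

Each output row is a convex combination of all input rows, which is why attention captures long-range context that a single dilated convolution cannot.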