2018
DOI: 10.1007/978-3-030-01261-8_45
Learning to Fuse Proposals from Multiple Scanline Optimizations in Semi-Global Matching

Cited by 54 publications
(42 citation statements)
References 43 publications
“…It has three SGA layers, two LGA layers and fifteen 3D convolutional layers for cost aggregation.

Guidance branch:
(6) from (3), 3×3 conv → 1/3 H × 1/3 W × 32
(7) 3×3 conv (no BN & ReLU) → 1/3 H × 1/3 W × 640
(8) split, reshape, normalize → 4 × 1/3 H × 1/3 W × 5 × 32
(9)-(11) from (6), repeat (6)-(8) → 4 × 1/3 H × 1/3 W × 5 × 32
(12) from (1), 3×3 conv → H × W × 16
(13) 3×3 conv (no BN & ReLU) → H × W × 75
(14) split, reshape, normalize → H × W × 75
(15)-(17) from (12), repeat (12)-(14) → H × W × 75

Cost aggregation:
input: 4D cost volume → 1/3 H × 1/3 W × 48 × 64
[1] 3×3×3 3D conv → 1/3 H × 1/3 W × 48 × 32
[2] SGA layer, weight matrices from (5) → 1/3 H × 1/3 W × 48 × 32
[3] 3×3×3 3D conv → 1/3 H × 1/3 W × 48 × 32
output: 3×3×3 3D-to-2D conv, upsampling → H × W × 193; softmax, regression, loss weight 0.2 → H × W × 1
[4] 3×3×3 3D conv, stride 2 → 1/6 H × 1/6 W × 48 × 48
[5] 3×3×3 3D conv, stride 2 → 1/12 H × 1/12 W × 48 × 64
[6] 3×3×3 3D deconv, stride 2 → 1/6 H × 1/6 W × 48 × 48
[7] 3×3×3 3D conv → 1/6 H × 1/6 W × 48 × 48
[8] 3×3×3 3D deconv, stride 2 → 1/3 H × 1/3 W × 48 × 32
[9] 3×3×3 3D conv → 1/3 H × 1/3 W × 48 × 32
[10] SGA layer, weight matrices from (8) → 1/3 H × 1/3 W × 48 × 32
output: 3×3×3 3D-to-2D conv, upsampling → H × W × 193; softmax, regression, loss weight 0.6 → H × W × 1
[11] 3×3×3 3D conv…”
Section: Results
confidence: 99%
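The SGA layers in the excerpt replace SGM's hard minimum with a learned, normalized weighted combination propagated along each scanline. A simplified, single-channel numpy sketch of one left-to-right pass (illustrative only; the function name, scalar per-pixel weights, and layout are assumptions, not GA-Net's actual implementation):

```python
import numpy as np

def sga_pass(cost, w):
    """One left-to-right pass of a semi-global aggregation (SGA) style
    recurrence, single channel for clarity.

    cost: (W, D) matching-cost slice for one image row.
    w:    (W, 5) per-pixel weights, assumed normalized (e.g. by softmax)
          so each row sums to 1, as in the "split, reshape, normalize" step.
    """
    W, D = cost.shape
    agg = cost.copy()
    for x in range(1, W):
        prev = agg[x - 1]
        shift_up = np.concatenate(([prev[0]], prev[:-1]))    # prev pixel, d-1
        shift_down = np.concatenate((prev[1:], [prev[-1]]))  # prev pixel, d+1
        # Weighted combination instead of SGM's hard minimum over transitions.
        agg[x] = (w[x, 0] * cost[x]
                  + w[x, 1] * prev
                  + w[x, 2] * shift_up
                  + w[x, 3] * shift_down
                  + w[x, 4] * prev.max())
    return agg
```

With all weight mass on the unary term (w[:, 0] = 1), the pass reduces to the raw cost, which makes the role of the learned weights easy to see.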
“…Batsos et al. proposed CBMV [1] to combine evidence from multiple basic matching costs. Schönberger et al. [19] proposed to classify scanline matching-cost candidates with a random forest classifier. Seki et al. proposed SGM-Nets [20] to provide learned penalties for SGM.…”
Section: Learning-Based Methods
confidence: 99%
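The per-pixel selection idea behind the scanline classification described above can be sketched as follows. Here the confidence scores are simply given as inputs, whereas SGM-Forest predicts them with a random forest; the function and variable names are illustrative:

```python
import numpy as np

def fuse_scanline_proposals(proposals, scores):
    """Pick, at every pixel, the disparity proposal from the most
    confident scanline direction.

    proposals: (K, H, W) disparity maps, one per scanline optimization.
    scores:    (K, H, W) per-proposal confidences (assumed given here;
               the learned method produces them from a classifier).
    Returns an (H, W) fused disparity map.
    """
    best = np.argmax(scores, axis=0)  # (H, W) index of winning scanline
    return np.take_along_axis(proposals, best[None], axis=0)[0]
```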
“…GC-Net [6] and PSMNet [2] construct a concatenation-based feature volume and incorporate a 3D CNN to aggregate contextual features. There are also works [1,19] that aggregate evidence from multiple hand-crafted matching-cost proposals. However, the above methods have several drawbacks.…”
Section: Introduction
confidence: 99%
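The concatenation-based feature volume mentioned above can be sketched in numpy as follows. This is a simplified full-resolution version; real implementations build the volume at reduced resolution inside a deep-learning framework:

```python
import numpy as np

def build_concat_volume(feat_l, feat_r, max_disp):
    """Concatenation-based 4D feature volume in the style of
    GC-Net / PSMNet (simplified sketch).

    feat_l, feat_r: (H, W, C) feature maps from a shared extractor.
    Returns (max_disp, H, W, 2C): at candidate disparity d, left
    features are paired with right features shifted by d pixels.
    """
    H, W, C = feat_l.shape
    vol = np.zeros((max_disp, H, W, 2 * C), dtype=feat_l.dtype)
    for d in range(max_disp):
        vol[d, :, d:, :C] = feat_l[:, d:]        # left features
        vol[d, :, d:, C:] = feat_r[:, : W - d]   # right, shifted by d
    return vol
```

A 3D CNN then aggregates over this (disparity, height, width) volume, which is what makes such methods memory-hungry compared with SGM-style aggregation.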
“…SGM has been applied in numerous fields, including building reconstruction, digital surface model generation, robot navigation, driver assistance and so forth [7][8][9]. However, the energy summation from all scanlines and the corresponding WTA strategy are empirical steps without a theoretical background, which is essentially inadequate when different scanlines propose inconsistent solutions [10]. Schönberger et al [10] proposed SGM-Forest, which trained a random forest to predict a scanline with the best disparity proposal.…”
Section: Introduction
confidence: 99%
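The scanline aggregation and winner-takes-all (WTA) steps that the excerpt calls empirical can be sketched as follows, assuming a single left-to-right pass over one image row; the penalty values P1/P2 and function names are illustrative:

```python
import numpy as np

def sgm_aggregate(cost, p1=1.0, p2=8.0):
    """Classic SGM cost aggregation along one scanline direction.

    cost: (W, D) per-pixel matching costs for one image row.
    p1:   penalty for a 1-pixel disparity change.
    p2:   penalty for larger disparity jumps.
    """
    W, D = cost.shape
    agg = np.empty_like(cost)
    agg[0] = cost[0]
    for x in range(1, W):
        prev = agg[x - 1]
        min_prev = prev.min()
        same = prev                                          # same disparity
        up = np.concatenate(([np.inf], prev[:-1])) + p1      # d-1, penalty P1
        down = np.concatenate((prev[1:], [np.inf])) + p1     # d+1, penalty P1
        jump = np.full(D, min_prev + p2)                     # any jump, P2
        # Subtracting min_prev keeps the values from growing unboundedly.
        agg[x] = cost[x] + np.minimum.reduce([same, up, down, jump]) - min_prev
    return agg

def wta(cost_volume):
    """Winner-takes-all: pick the lowest-cost disparity per pixel."""
    return np.argmin(cost_volume, axis=-1)
```

In full SGM the aggregated costs from all scanline directions are summed before WTA; it is exactly this unweighted summation that SGM-Forest replaces with a learned per-pixel choice among the scanline proposals.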