2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/cvpr.2018.00205

Deep Material-Aware Cross-Spectral Stereo Matching

Cited by 47 publications (50 citation statements); references 24 publications.

“…(aux) represents using the auxiliary loss during training. The STN(F) + SMN(aux)(ori) means the original image pairs instead of (Zhi et al 2018), where the DMC(w. seg.)…”
Section: Benchmark Results
Confidence: 99%

“…Then, the absolute difference between the warped image and the left image, also called reconstruction error or photometric loss, is minimized to supervise disparity predictions. Godard et al (Godard, Mac Aodha, and Brostow 2017) (Zhi et al 2018). The proposed method performs well on the challenging materials like clothes (row 1,2), vegetation (row 2), lights (row 3).…”
Section: Related Work (Unsupervised Depth Estimation)
Confidence: 91%
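
The photometric loss described in this statement is the standard self-supervision signal for disparity learning: the right image is warped into the left view using the predicted disparity, and the absolute difference to the left image is minimized. The sketch below is a minimal, hypothetical PyTorch version of that idea; the function names, L1 formulation, and tensor layout are illustrative assumptions, not the implementation of Zhi et al. 2018 or Godard et al. 2017.

```python
# Minimal sketch of a photometric (reconstruction) loss for unsupervised
# disparity learning. Illustrative assumption only; not the implementation
# of Zhi et al. 2018 or Godard et al. 2017.
import torch
import torch.nn.functional as F

def warp_right_to_left(right, disparity):
    """Warp the right image into the left view with a left-referenced
    horizontal disparity map. right: [B, C, H, W], disparity: [B, 1, H, W]."""
    _, _, h, w = right.shape
    # Normalized sampling grid in [-1, 1], as expected by grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=right.device),
        torch.linspace(-1.0, 1.0, w, device=right.device),
        indexing="ij",
    )
    # A pixel at x in the left image appears at x - d in the right image,
    # so shift the sampling x-coordinates left by the normalized disparity.
    xs = xs.unsqueeze(0) - 2.0 * disparity.squeeze(1) / (w - 1)
    grid = torch.stack((xs, ys.unsqueeze(0).expand_as(xs)), dim=-1)
    return F.grid_sample(right, grid, align_corners=True, padding_mode="border")

def photometric_loss(left, right, disparity):
    """Mean absolute difference (L1) between the left image and the warped
    right image, i.e. the reconstruction error minimized to supervise the
    disparity prediction."""
    reconstructed = warp_right_to_left(right, disparity)
    return torch.mean(torch.abs(left - reconstructed))
```

In practice this raw L1 term is usually combined with structural similarity and smoothness penalties, and in the cross-spectral setting of the cited paper the photometric comparison cannot be applied directly, since RGB and NIR intensities of the same surface generally differ.
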
“…Recently, also cross-spectral stereo camera approaches came up that combine, e.g., a color and an infrared camera for measuring multiple spectral components [42]. As image content is recorded at different spatial positions, the aim is to register heterogeneous content.…”
Section: Related Multi-spectral Imaging Techniques
Confidence: 99%