2018
DOI: 10.1364/oe.26.008179

Depth and thermal sensor fusion to enhance 3D thermographic reconstruction

Abstract: Three-dimensional geometrical models with incorporated surface temperature data provide important information for various applications such as medical imaging, energy auditing, and intelligent robots. In this paper we present a robust method for mobile and real-time 3D thermographic reconstruction through depth and thermal sensor fusion. A multimodal imaging device consisting of a thermal camera and a RGB-D sensor is calibrated geometrically and used for data capturing. Based on the underlying principle that t…
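The fusion described in the abstract hinges on mapping each depth pixel into the thermal image via the calibrated geometry of the two cameras. A minimal sketch of that reprojection step, using made-up intrinsics and extrinsics (placeholder values, not the calibration from the paper):

```python
import numpy as np

# Illustrative depth-to-thermal reprojection, the core geometric step in
# depth/thermal sensor fusion. All intrinsics and extrinsics below are
# invented placeholder values, not the paper's calibration results.

K_depth = np.array([[525.0, 0.0, 319.5],
                    [0.0, 525.0, 239.5],
                    [0.0, 0.0, 1.0]])
K_thermal = np.array([[400.0, 0.0, 159.5],
                      [0.0, 400.0, 119.5],
                      [0.0, 0.0, 1.0]])
# Rigid transform from the depth-camera frame to the thermal-camera frame
# (identity rotation, small baseline along x, purely illustrative).
R = np.eye(3)
t = np.array([0.05, 0.0, 0.0])

def depth_pixel_to_thermal(u, v, z):
    """Back-project depth pixel (u, v) with depth z [m] to a 3D point,
    transform it into the thermal frame, and project into the thermal image."""
    p3d = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])  # 3D in depth frame
    p_th = R @ p3d + t                                        # 3D in thermal frame
    uvw = K_thermal @ p_th                                    # homogeneous pixel
    return uvw[:2] / uvw[2]

uv = depth_pixel_to_thermal(320, 240, 2.0)  # thermal pixel for one depth sample
```

Looking up the temperature at `uv` (with interpolation and occlusion checks in a real system) is what attaches thermal data to each reconstructed 3D point.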

Cited by 42 publications (13 citation statements)
References 18 publications
“…Recent studies have introduced methods that combine thermographic error models to improve robustness [21], [22]. However, these methods rely heavily on RGB-D sensors and are less suitable for outdoor environments.…”
Section: Related Work
confidence: 99%
“…Estimating the dense and accurate depth of a scene from a single RGB image is one of the fundamental problems of computer vision and is essential for various applications, such as scene understanding [1][2][3][4], 3D modeling [5,6], robotics [7,8], virtual reality [9], and autonomous driving [10]. Given training RGB images and their corresponding depth maps, depth prediction can be regarded as a pixel-level regression problem; that is, the model directly learns to predict the depth of each pixel in a single image.…”
Section: Introduction
confidence: 99%
“…Differently, Cao et al [17] and Yue et al [18] optimized the accumulated error by first identifying the loop closures formed through successful 3D registration between each current frame and earlier frames, and then performing a pose graph optimization [19] to reduce sensor pose drift. However, in their work the loop closures are identified either by manually checking the 3D point cloud overlap ratio [17] or by using the measurement system setup information [18], which prevents their use in a practical 3D scanning system. Moreover, the pose graph optimization in [17,18] only optimizes the inconsistency between two associated sensor poses and their relative pose constraint; it ignores important surface consistency information in the 3D registration process [6].…”
Section: Introduction
confidence: 99%
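The pose graph optimization discussed in the statement above can be illustrated with a toy 1D example: odometry constraints between consecutive poses plus one loop-closure constraint, solved by least squares with the first pose fixed. All measurements are invented; real systems optimize over full 6-DoF poses with dedicated graph solvers.

```python
import numpy as np

# Toy 1D pose graph: poses x_0..x_3 with odometry constraints
# (x_{i+1} - x_i ≈ measured step) and one loop-closure constraint
# (x_3 - x_0 ≈ measured offset). x_0 is fixed at 0 to anchor the graph.

edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.1),  # drifting odometry
         (0, 3, 3.0)]                            # loop closure

# Build the linear system A x = b over the free poses x_1..x_3 (x_0 = 0).
A = np.zeros((len(edges), 3))
b = np.zeros(len(edges))
for row, (i, j, meas) in enumerate(edges):
    if j > 0:
        A[row, j - 1] += 1.0   # coefficient of the "to" pose
    if i > 0:
        A[row, i - 1] -= 1.0   # coefficient of the "from" pose
    b[row] = meas

x, *_ = np.linalg.lstsq(A, b, rcond=None)  # optimized poses x_1..x_3
```

The loop closure pulls the drifted odometry chain (which would place `x_3` at 3.1) back toward the measured offset of 3.0, spreading the correction across all poses; the criticism quoted above is that this kind of optimization considers only pairwise pose constraints, not the surface consistency of the registered point clouds.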