2019
DOI: 10.3390/s19235082
A New Model of RGB-D Camera Calibration Based on 3D Control Field

Abstract: With the extensive application of RGB-D cameras in robotics, computer vision, and many other fields, accurate calibration is becoming increasingly critical for these sensors. However, most existing models for calibrating depth and the relative pose between a depth camera and an RGB camera are not universally applicable to the many different kinds of RGB-D cameras. In this paper, using the collinearity equation and space resection of photogrammetry, we present a new model to correct the depth and calibrate the relative pose …
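For context on the photogrammetric machinery the abstract invokes, the standard collinearity equations (the basis of space resection) are shown below. This is the textbook form with conventional symbols, not a reproduction of the paper's own model:

```latex
x - x_0 = -f\,\frac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}
                   {a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)},
\qquad
y - y_0 = -f\,\frac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}
                   {a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}
```

Here $(x_0, y_0, f)$ are the interior orientation parameters, $(X_S, Y_S, Z_S)$ is the perspective centre, and $a_i, b_i, c_i$ are entries of the rotation matrix; space resection solves for the exterior orientation from known 3D control points, which is why a 3D control field suits this calibration approach.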

Cited by 16 publications (13 citation statements)
References 49 publications
“…Currently, 3D dynamic reconstruction techniques mainly include time of flight (TOF) [3,4], binocular stereo vision (BSV) [5,6], and structured light (SL) [7,8]. All these techniques have advantages and disadvantages, which are as follows: (1) TOF technique: The TOF technique uses active sensing to capture 3D range data as per-pixel depths [9]. A near-infrared wave is emitted by the light source from the camera, and then its reflected wave is recorded by a dedicated sensor.…”
Section: Introduction
confidence: 99%
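The per-pixel TOF principle quoted above reduces to halving the round-trip travel time of the emitted near-infrared signal. A minimal sketch of that conversion (the function name is illustrative, not from the cited work):

```python
# Time-of-flight depth principle: an emitted near-infrared wave travels to the
# scene and back, so depth is half the round-trip distance at the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s


def tof_depth(round_trip_time_s: float) -> float:
    """Per-pixel depth (metres) from the measured round-trip travel time."""
    return C * round_trip_time_s / 2.0
```

For scale, a 10 ns round trip corresponds to roughly 1.5 m of depth, which is why TOF sensors need sub-nanosecond timing to reach centimetre accuracy.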
“…In other words, the plane containing the camera optical center (o') and re-projected 3D line (PQ) and another plane containing the camera optical center (o) and the 3D line (PQ) should conform to the geometric coplanar condition under the unified coordinate system. The relevant error function is expressed using (12). P s and P e are 3D endpoint coordinates of a 3D line in the reference frame coordinate system.…”
Section: Geometric Constraint Model of 2D and 3D Lines
confidence: 99%
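The coplanarity condition described in the quotation — the optical centre and the 3D line endpoints must lie in the plane back-projected from the observed 2D line — can be sketched as a pair of point-to-plane residuals. This is a generic illustration of the constraint, not the paper's equation (12); all names are hypothetical:

```python
import numpy as np


def line_coplanarity_error(o, p_s, p_e, n):
    """Residuals of the line coplanarity constraint.

    o        : camera optical centre, shape (3,)
    p_s, p_e : 3D endpoints of the line (P_s, P_e), expressed in the same
               coordinate system as o, shape (3,)
    n        : unit normal of the interpretation plane spanned by the optical
               centre and the observed 2D line, shape (3,)

    Both signed distances vanish exactly when the optical centre, the 3D line,
    and the observed 2D line satisfy the coplanarity condition.
    """
    return np.array([n @ (p_s - o), n @ (p_e - o)])
```

In a bundle-adjustment setting these two residuals per line correspondence would be stacked into the error vector that the Jacobian below differentiates.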
“…Given (12) and the chain derivation rule, the Jacobian matrix of the error term is presented in (13) as follows:…”
Section: Geometric Constraint Model of 2D and 3D Lines
confidence: 99%
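Chain-rule Jacobians like the one referred to in (13) are easy to get wrong by hand, so a common sanity check is to compare against a central-difference numerical Jacobian. A generic sketch of that check (not the cited derivation):

```python
import numpy as np


def numeric_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f: R^n -> R^m, evaluated at x.

    Useful for verifying an analytic chain-rule Jacobian: each column j is
    (f(x + eps * e_j) - f(x - eps * e_j)) / (2 * eps).
    """
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x), dtype=float)
    jac = np.zeros((fx.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        jac[:, j] = (np.asarray(f(x + step)) - np.asarray(f(x - step))) / (2 * eps)
    return jac
```

Agreement to a few significant digits between the analytic and numerical Jacobians is strong evidence the chain-rule derivation is correct.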