2022 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra46639.2022.9811784

Self-Supervised Camera Self-Calibration from Video

Cited by 15 publications (8 citation statements)
References 32 publications
“…HR‐Depth only includes a depth estimation network and a pose network, so it assumes that the camera intrinsic parameters are known, which makes it impossible to train a self‐supervised model from arbitrary video sequences. Inspired by the self‐calibration method for a camera with a unified camera model (Fang et al., 2022), a camera network is proposed in the SMDEU to estimate the camera intrinsic parameters (Figure 1). Thus, the SMDEU can simultaneously estimate monocular depth and learn camera pose and intrinsics from input video sequences.…”
Section: Methods (mentioning)
confidence: 99%
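As a rough sketch of the idea in this excerpt (a camera network predicting intrinsics so that depth, pose, and intrinsics can be learned jointly from raw video), the following illustrative PyTorch module predicts pinhole intrinsics plus a unified-camera-model distortion parameter. The architecture and all names are assumptions for illustration, not the SMDEU or Fang et al. implementation:

```python
import torch
import torch.nn as nn

class CameraNet(nn.Module):
    """Illustrative camera (intrinsics) network: given an image, predict
    fx, fy, cx, cy and a unified-camera-model distortion parameter alpha.
    Hypothetical sketch, not the cited papers' code."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 5)  # fx, fy, cx, cy, alpha

    def forward(self, img):
        b, _, h, w = img.shape
        z = self.encoder(img).flatten(1)
        p = self.head(z)
        # Bound the raw outputs and scale them to pixel units.
        fx = torch.sigmoid(p[:, 0]) * w
        fy = torch.sigmoid(p[:, 1]) * h
        cx = torch.sigmoid(p[:, 2]) * w
        cy = torch.sigmoid(p[:, 3]) * h
        alpha = torch.sigmoid(p[:, 4])  # unified-model distortion in [0, 1]
        return fx, fy, cx, cy, alpha
```

In such a setup the predicted intrinsics would feed the projection/unprojection used by the photometric loss, so gradients reach the camera network during self-supervised training alongside the depth and pose networks.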
“…Inspired by the self-calibration method for a camera with a unified camera model (Fang et al., 2022), a camera network is proposed in the SMDEU to estimate the camera intrinsic parameters (Figure 1). Thus, the SMDEU can simultaneously estimate monocular depth and learn camera pose and intrinsics from input video sequences.…”
Section: 2 (mentioning)
confidence: 99%
“…The total number of degrees of freedom of the camera parameters is 7 (𝑡 has 2 degrees of freedom because the scale cannot be recovered, 𝑅 has 3 degrees of freedom, and each camera has 1 degree of freedom), which is equal to the number of degrees of freedom of the fundamental matrix 𝐹. The internal and external parameters of the image can be recovered using a self-calibration method [37]. However, even when these constraints are met, the camera parameters solved by [37] suffer from large errors when the scene is approximately planar and the matching error is large.…”
Section: Estimation of Image Intrinsic and Extrinsic Parameters (mentioning)
confidence: 99%
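The degree-of-freedom counting in this excerpt can be made explicit. Below is a sketch assuming, as the excerpt implies, that only one intrinsic (e.g. the focal length) is unknown in each camera:

```latex
% Counting degrees of freedom (sketch; assumes only the focal length
% is unknown per camera, hence "each camera has 1 degree of freedom"):
\[
\underbrace{3}_{R}
+ \underbrace{2}_{t\ (\text{up to scale})}
+ \underbrace{1 + 1}_{f_1,\ f_2}
= 7
= \underbrace{8}_{3 \times 3\ \text{up to scale}} - \underbrace{1}_{\det F = 0}
= \mathrm{dof}(F).
\]
```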
“…The internal and external parameters of the image can be recovered using a self-calibration method [37]. However, even when these constraints are met, the camera parameters solved by [37] suffer from large errors when the scene is approximately planar and the matching error is large. Therefore, here we use the method of optimizing the objective function in [6] to solve for the internal and external parameters of the camera.…”
Section: Estimation of Image Intrinsic and Extrinsic Parameters (mentioning)
confidence: 99%
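As a rough illustration of the optimization-based alternative the excerpt refers to (the specific objective in [6] is not given here), the sketch below refines a focal length and pose by nonlinear least squares on reprojection error. The parameterization, function names, and use of SciPy are assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, pts3d, pts2d_obs, img_size):
    """Residuals for one camera with unknown focal length and pose.
    params = [f, rx, ry, rz, tx, ty, tz] (angle-axis rotation).
    Illustrative sketch, not the objective of [6]."""
    f, rvec, t = params[0], params[1:4], params[4:7]
    w, h = img_size
    K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1.0]])
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = R @ pts3d.T + t[:, None]      # 3 x N points in the camera frame
    proj = K @ cam                      # pinhole projection
    proj = (proj[:2] / proj[2]).T       # N x 2 pixel coordinates
    return (proj - pts2d_obs).ravel()

# Usage sketch: given rough 3D points (e.g. from triangulation) and their
# detections pts2d, refine focal length and pose by nonlinear least squares:
# x0 = np.array([1000.0, 0, 0, 0, 0, 0, 5.0])
# sol = least_squares(reprojection_residuals, x0,
#                     args=(pts3d, pts2d, (1920, 1080)))
```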
“…Substantial work has been done to develop and demonstrate methods for extracting range data from monocular RGB images [4,6]. Universally, these methods use convolutional neural networks (CNNs) to identify objects and determine their range. Since camera images are the source data, the locations in the scene are determined by geometric calculations based on the position of the detected objects on the sensor.…”
Section: Monocular Thermal Ranging (mentioning)
confidence: 99%
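As a minimal illustration of the geometric calculation described in this excerpt (not the cited papers' actual pipelines), two common pinhole-geometry range estimates from a detected object's position and size on the sensor:

```python
def range_from_known_height(focal_px, real_height_m, bbox_height_px):
    """Similar-triangles range estimate from an object's apparent size:
    Z = f * H / h. Illustrative sketch only."""
    return focal_px * real_height_m / bbox_height_px

def range_from_ground_plane(focal_px, cam_height_m, v_bottom_px, v_horizon_px):
    """Flat-ground range estimate from where the object meets the ground:
    Z = f * h_cam / (v_bottom - v_horizon). Illustrative sketch only."""
    return focal_px * cam_height_m / (v_bottom_px - v_horizon_px)

# Example: a 1.7 m pedestrian spanning 85 px with a 900 px focal length
# is roughly 900 * 1.7 / 85 = 18 m away.
print(range_from_known_height(900.0, 1.7, 85.0))  # -> 18.0
```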