2020
DOI: 10.1109/jsen.2020.2968477
Single View 3D Reconstruction Based on Improved RGB-D Image

Cited by 13 publications (7 citation statements). References 41 publications.
“…Some methods suggest learning a 3D shape model from key points or silhouettes [29]. In some studies, the single image's depth map is first calculated using machine-learning-based techniques, and then a 3D model is constructed using RGB-D images [30].…”
Section: Learning-based Reconstruction
confidence: 99%
“…u_i^init is the 2D projection point of the 3D point p_i(x, y, z) according to the camera pose T_j = [R|t], obtained from l·u_i^init = K[R|t]·p_i, where l is a scale factor, R and t are the rotation matrix and translation vector that define the camera pose, and K is the camera intrinsic matrix [21]. We designed a tiny fully convolutional image-patch matching network, named M_R_Net, to find the refined position u_i^refined of u_i^init (shown in Figure 4). In the M_R_Net architecture, image patches x and z are passed to the network, whose output is a map of matching scores. The Convolutional Neural Network (CNN) blocks in the x and z branches share weights, and the output of each CNN block passes through one 2D convolution layer to produce the final output; the position with the highest score in the matching-score map is the best matching position of z in x.…”
Section: Image Patch Matching
confidence: 99%
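The excerpt above combines two steps: projecting a 3D point into the image with a known pose, and locating a template patch inside a search patch via a matching-score map. A minimal NumPy sketch of both is given below; it is illustrative only, with plain cross-correlation standing in for the learned M_R_Net scores, and the function names are my own.

```python
import numpy as np

def project_point(p, K, R, t):
    """Pinhole projection l * u = K (R p + t); returns the 2D pixel u."""
    cam = R @ p + t                  # point in camera coordinates
    uvw = K @ cam                    # homogeneous image coordinates
    return uvw[:2] / uvw[2]          # divide by the scale factor l = uvw[2]

def best_match(x, z):
    """Return (row, col) of the highest matching score of template z in x."""
    H, W = x.shape
    h, w = z.shape
    scores = np.empty((H - h + 1, W - w + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            # raw cross-correlation as a stand-in for the learned score
            scores[i, j] = np.sum(x[i:i + h, j:j + w] * z)
    return np.unravel_index(np.argmax(scores), scores.shape)

# Example: identity pose, simple intrinsics (fx = fy = 500, cx = 320, cy = 240).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
u_init = project_point(np.array([0.2, -0.1, 2.0]), K, np.eye(3), np.zeros(3))
```

The argmax of the score map plays the role of u_i^refined: it is the pixel where the template best explains the image content, refining the geometry-only estimate u_i^init.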
“…(5) Then, formula (14) is used to move the points in the feature-rich region along the normal direction to obtain the filtered new point cloud data; (6) after all points are calculated in turn and the new point cloud data are obtained through the transformation, end [31]. If the data sequence is discrete, it is called a discrete time series.…”
Section: Point Cloud Smoothness in Feature-Rich Regions
confidence: 99%
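The filtering step in the excerpt moves each point in a feature-rich region along its normal. Formula (14) from the cited work is not reproduced on this page, so the sketch below uses a hypothetical stand-in: the signed distance from each point to its neighborhood centroid, projected onto the normal, scaled by a strength factor. The function name and parameters are assumptions for illustration.

```python
import numpy as np

def smooth_along_normals(points, normals, neighbors, strength=0.5):
    """Move each point along its unit normal toward its neighborhood centroid.

    points    : (N, 3) array of point coordinates
    normals   : (N, 3) array of unit normals, one per point
    neighbors : list of index lists, neighbors[i] = neighborhood of point i
    """
    new_points = points.copy()
    for i, idx in enumerate(neighbors):
        centroid = points[idx].mean(axis=0)
        # signed offset of the centroid from the point, along the normal
        offset = np.dot(centroid - points[i], normals[i])
        new_points[i] = points[i] + strength * offset * normals[i]
    return new_points
```

Because the displacement is restricted to the normal direction, tangential detail in the feature-rich region is preserved while out-of-surface noise is damped, which matches the intent of the step described above.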