The main objective of this study is to develop a single-camera three-dimensional (3D) surface imaging technique that reduces the disparity error in 3D image reconstruction and simplifies the calibration process of the imaging system. A typical advanced stereoscopic 3D imaging system uses a pair of imaging devices (e.g., complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) sensors), imaging lenses, and other accessories (e.g., light sources, polarizing filters, and diffusers). To reconstruct the 3D scene, the system must calibrate the cameras and compute a disparity map. However, in practice a pair of imaging devices is never perfectly identical, so the camera orientation, lens focal length, and intrinsic parameters of each camera must be finely adjusted and compensated.

More importantly, the two cameras of a conventional stereoscopic system may respond differently to the incident light reflected from the target surface, so the pixel information in the left and right images can differ slightly. This increases the disparity error even after the stereo vision system has been calibrated and compensated for rotation and vertical offsets between the two cameras. This thesis aims to solve these challenges by proposing a new stereo vision scheme that uses only one camera, reconstructing the target's 3D data from two images captured at two different camera positions.

DEDICATION

This thesis is dedicated to my family, who always support me and pray for my success.