Workflows can be optimized using three-dimensional (3D) computer vision (CV) by monitoring processes from multiple angles, with clear potential for efficiency gains and cost savings. Security and surveillance systems continuously monitor input scenes to recognize and classify objects, human faces, and moving targets. Input reconstruction in these systems is difficult because the 3D scene is mapped onto two-dimensional (2D) pixels. We present a new framework for facial recognition and reconstruction based on a 3D CV paradigm. In this framework, deep neural networks distinguish mapped from unmapped pixels during 2D-to-3D and 3D-to-2D transformations. To recognize human faces via correlation analysis, the input image is first examined for textural properties. The mapping procedure then extracts textural elements using dimensional contours, after which continuous contours perform the 3D mapping and reduce the number of false positives. In addition, the framework isolates individual contours of moving objects or faces to increase the number of training iterations. Finally, missing pixels introduced by the dimension conversion are filled using discrete and continuous contours to reconstruct the human face. The proposed framework reduces the false rate by 10% and the error by 9.89%, while improving recognition accuracy by 13.93% and precision by 13.21%.
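To make the correlation-based matching and missing-pixel filling steps concrete, the following is a minimal, illustrative sketch in NumPy. It is not the paper's implementation: the function names (`ncc`, `fill_missing`), the 3x3 neighbourhood, and the mean-fill rule are simplifying assumptions standing in for the contour-based reconstruction described above.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size grayscale
    patches; a simple stand-in for the correlation analysis used to
    compare textural properties of face regions."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def fill_missing(img, mask):
    """Fill missing pixels (mask == True) with the mean of their valid
    3x3 neighbours, propagating inward until every hole is closed; a
    toy analogue of rebuilding pixels lost in the dimension conversion."""
    out = img.astype(float).copy()
    missing = mask.copy()
    while missing.any():
        progressed = False
        for i, j in zip(*np.where(missing)):
            nb = out[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            valid = ~missing[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            if valid.any():
                out[i, j] = nb[valid].mean()
                missing[i, j] = False
                progressed = True
        if not progressed:  # fully masked input: nothing to propagate
            break
    return out
```

In this sketch, a candidate face patch scoring a high `ncc` against a stored template would count as a match, and `fill_missing` repairs pixels flagged as unmapped before the comparison is made.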