Mapping information from photographic images to volumetric medical imaging scans is essential for linking image spaces with physical environments, such as in image-guided surgery. Current methods for accurate mapping of photographic images to computed tomography (CT) images can be computationally intensive and/or require specialized hardware. For general-purpose 3-D mapping of bulk specimens in histological processing, a cost-effective solution is necessary. Here, we compare the integration of a commercial 3-D camera and cell-phone imaging with a surface registration pipeline. Using surgical implants and chuck-eye steak as test phantoms, we obtain 3-D CT reconstructions and sets of photographic images from two sources: Canfield Imaging's H1 camera and an iPhone 14 Pro. We perform surface reconstruction from the photographic images using commercial tools and open-source Neural Radiance Fields (NeRF) code, respectively. We then register the reconstructed surfaces with the iterative closest point (ICP) method. Manually placed landmarks are identified at three locations on each surface. Registration of the Canfield surfaces for the three objects yields landmark distance errors of 1.747, 3.932, and 1.692 mm, while registration of the corresponding iPhone surfaces yields errors of 1.222, 2.061, and 5.155 mm. Photographic imaging of an organ sample prior to tissue sectioning thus provides a low-cost means of establishing correspondence between histological samples and 3-D anatomical imaging.
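
To make the registration and landmark-error steps concrete, the following is a minimal sketch of point-to-point ICP between a photo-derived surface and a CT-derived surface, followed by landmark distance evaluation. It assumes Open3D as the ICP implementation and uses hypothetical file names and landmark coordinates; it is not the authors' exact pipeline.

```python
# Minimal ICP surface-registration sketch (assumes Open3D; not the authors' exact pipeline).
import numpy as np
import open3d as o3d

# Hypothetical inputs: a photo-derived surface (e.g., exported from a NeRF or
# Canfield reconstruction) and a CT-derived surface, both as point clouds in mm.
source = o3d.io.read_point_cloud("photo_surface.ply")   # reconstructed from photographs
target = o3d.io.read_point_cloud("ct_surface.ply")      # extracted from the CT scan

# Point-to-point ICP with a coarse correspondence threshold (mm); the initial
# transform here is identity, so a rough pre-alignment may be needed in practice.
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=5.0,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

# Evaluate accuracy with manually placed landmark pairs (hypothetical coordinates, mm):
# apply the ICP transform to the photo-side landmarks and measure Euclidean
# distances to their CT counterparts.
photo_landmarks = np.array([[10.0, 5.0, 2.0], [22.0, 7.5, 4.0], [15.0, 12.0, 6.0]])
ct_landmarks = np.array([[10.5, 5.2, 2.3], [21.6, 7.9, 4.4], [15.4, 11.7, 6.2]])

homog = np.hstack([photo_landmarks, np.ones((3, 1))])    # homogeneous coordinates
mapped = (result.transformation @ homog.T).T[:, :3]      # apply 4x4 ICP transform
errors = np.linalg.norm(mapped - ct_landmarks, axis=1)   # per-landmark error (mm)
print("Landmark distance errors (mm):", errors)
```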