Close-range photogrammetry is widely used to measure the surface shape of various objects and their deformation. The classical approach uses a stereo pair of images captured from different angles by two digital video cameras. The surface shape is measured by triangulating a set of corresponding two-dimensional points from these images using the known relative position of the cameras. Various algorithms are used to find these corresponding points; several photogrammetry methods rely on cross-correlation for this purpose. This paper discusses the possibility of replacing the correlation algorithm with neural networks for determining the displacements of small image regions. Neural networks make it possible to increase both the computation speed and the spatial resolution of the measurement results. To verify the applicability of convolutional neural networks to photogrammetry tasks, computer and physical modeling were carried out. For the first test, a set of synthetically generated images emulating Particle Image Velocimetry (PIV) images was used; since the particle displacements in these images are known, the processing accuracy can be estimated directly. For the second test, a series of experimental images of surfaces with different degrees of deformation was obtained. Computational experiments were performed to process the synthetic and experimental images using the selected neural networks and a classical cross-correlation algorithm. The limitations on the use of the compared algorithms were determined, and their error in reconstructing the three-dimensional shape of the surface was evaluated. Computer and physical modeling demonstrated that neural networks are workable and efficient for processing photogrammetric images.
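
The sketch below illustrates the kind of correlation-based patch matching the abstract refers to: the displacement of a small image region between two frames is found by maximizing zero-normalized cross-correlation over integer shifts. This is a minimal illustration, not the authors' implementation; the patch size, search radius, and function names are assumptions chosen for the example.

```python
# Illustrative sketch (not the paper's code): estimate the displacement of a
# small image region between two frames by maximizing zero-normalized
# cross-correlation (ZNCC) over integer pixel shifts.
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def patch_displacement(img0, img1, top, left, size=32, radius=8):
    """Integer-pixel displacement of the (size x size) patch at (top, left)
    from img0 to img1, searched within +/- radius pixels (assumed values)."""
    ref = img0[top:top + size, left:left + size]
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            # Skip candidate windows that fall outside the second frame.
            if y < 0 or x < 0 or y + size > img1.shape[0] or x + size > img1.shape[1]:
                continue
            score = zncc(ref, img1[y:y + size, x:x + size])
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score

# Example: a synthetic 3-pixel shift to the right is recovered as (dy, dx) = (0, 3).
rng = np.random.default_rng(0)
frame0 = rng.random((128, 128))
frame1 = np.roll(frame0, shift=3, axis=1)
print(patch_displacement(frame0, frame1, top=48, left=48))
```

In practice, such correlation searches are refined to sub-pixel accuracy and repeated over a grid of patches, which is the per-patch cost the paper proposes to replace with a neural network.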