Abstract-Contactless fingerprint recognition systems are being studied to overcome intrinsic limitations of traditional touch-based acquisition technologies, such as the release of latent fingerprints on the sensor platen, non-linear spatial distortions in the captured samples, and significant image differences caused by the moisture level and pressure of the fingertip on the sensor surface. Fingerprint images captured by single cameras, however, can be affected by perspective distortions and deformations due to incorrect alignment of the finger with respect to the camera optical axis. These non-idealities can alter the ridge pattern and reduce the visibility of fingerprint details, thus decreasing recognition accuracy. Some systems in the literature overcome this problem by computing three-dimensional models of the finger. Unfortunately, such approaches usually rely on complex and expensive acquisition setups, which limit their applicability in consumer devices such as mobile phones and tablets. In this paper, we present a novel approach that compensates for perspective deformations and improper fingertip alignments in single-camera systems. The approach uses neural networks to estimate the orientation difference between two contactless fingerprint acquisitions, and registers the samples by applying the estimated rotation angle to a synthetic three-dimensional model of the finger surface. The generalization capability of neural networks yields a robust estimation of the orientation difference while requiring far fewer computational resources than traditional techniques. Experimental results show that the approach is feasible and can effectively enhance the recognition accuracy of single-camera biometric systems. On the evaluated dataset of 800 contactless images, the proposed method reduced the equal error rate of the considered biometric system from 3.04% to 2.20%.
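As an illustration of the registration step summarized above, the sketch below maps a contactless fingerprint onto a simple synthetic cylindrical finger surface, rotates it by a given orientation difference, and reprojects it to the image plane. The cylindrical geometry, the orthographic projection, and the function rotate_on_cylinder are illustrative assumptions, not the exact finger model or processing chain used in the paper; in the proposed approach the rotation angle would be supplied by the neural network estimator.

import numpy as np


def rotate_on_cylinder(image, angle_deg):
    """Re-render a grayscale fingerprint image as if the finger, approximated
    by a cylinder whose axis is vertical in the image, had been rotated by
    angle_deg degrees around its longitudinal axis (orthographic projection).
    This is a simplified stand-in for the paper's synthetic 3-D finger model."""
    h, w = image.shape
    radius = w / 2.0                        # assumed cylinder radius in pixels
    theta = np.deg2rad(angle_deg)
    out = np.zeros_like(image, dtype=float)

    # For each output column, compute the cylinder angle it observes after
    # compensating the estimated rotation, then resample the input column.
    xs = np.arange(w)
    phi_out = np.arcsin(np.clip((xs - radius) / radius, -1.0, 1.0))
    phi_in = phi_out + theta                # angle on the original surface
    visible = np.abs(phi_in) <= np.pi / 2   # points still facing the camera
    x_in = radius + radius * np.sin(phi_in)

    # Bilinear resampling along the horizontal axis.
    x0 = np.clip(np.floor(x_in).astype(int), 0, w - 2)
    frac = x_in - x0
    cols = (1.0 - frac) * image[:, x0] + frac * image[:, x0 + 1]
    out[:, visible] = cols[:, visible]
    return out


if __name__ == "__main__":
    # Toy usage: register a synthetic ridge-like pattern assuming a 15 degree
    # orientation difference between the two acquisitions.
    probe = np.tile(np.sin(np.linspace(0.0, 20.0 * np.pi, 256)), (256, 1))
    registered = rotate_on_cylinder(probe, angle_deg=15.0)
    print(registered.shape)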