Dynamic distortion is one of the most critical factors degrading the viewing experience of automotive augmented reality head-up displays (AR-HUDs). The wide range of viewpoints and the large display area produce extraordinarily complex distortions. Most existing neural-network-based methods first capture distorted images and then derive predistorted data for training. This paper proposes a neural-network-based distortion prediction framework that trains directly on the distorted data, enabling dynamic adaptation of AR-HUD distortion correction while avoiding the errors introduced by coordinate interpolation.
In addition, we predict distortion offsets rather than absolute distortion coordinates, and we present a field-of-view (FOV)-weighted loss function that exploits the spatially varying character of the distortion to further improve prediction accuracy. Experiments show that our methods improve the prediction accuracy of AR-HUD dynamic distortion without increasing network complexity or data-processing overhead.
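
To make the two contributions concrete, the sketch below illustrates, in PyTorch, offset prediction and an FOV-weighted loss of the kind described above. It is a minimal illustration under assumed shapes: the architecture, the radial weight 1 + αr, and all names (`OffsetNet`, `fov_weighted_loss`, `alpha`) are hypothetical stand-ins, not the paper's actual design.

```python
import torch

class OffsetNet(torch.nn.Module):
    """Hypothetical predictor: maps a 3-D eye position to a dense
    (dx, dy) offset field over an H x W correction grid."""
    def __init__(self, h=32, w=64):
        super().__init__()
        self.h, self.w = h, w
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, h * w * 2),
        )

    def forward(self, eye_pos):                   # eye_pos: (N, 3)
        out = self.mlp(eye_pos)                   # (N, h*w*2)
        return out.view(-1, self.h, self.w, 2)    # per-point offsets

def fov_weighted_loss(pred, target, grid_xy, alpha=1.0):
    """L2 loss on offsets, up-weighted toward the FOV periphery,
    where the distortion varies most (assumed weighting form).

    pred, target: (N, H, W, 2) offset fields
    grid_xy:      (H, W, 2) undistorted grid coords in [-1, 1]
    """
    radius = torch.linalg.norm(grid_xy, dim=-1)   # (H, W) distance from center
    weight = 1.0 + alpha * radius                 # grows toward the periphery
    err = ((pred - target) ** 2).sum(dim=-1)      # (N, H, W) squared error
    return (weight * err).mean()

# Correction adds the predicted offsets to the undistorted grid:
# predistorted_xy = grid_xy + model(eye_pos)
```

One general motivation for regressing offsets rather than absolute coordinates is that the target field is small and centered near zero, which is typically easier for a network to fit accurately.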