Accurate navigation of an autonomous underwater vehicle is important for its reliable operation. However, this task is challenging due to the limited propagation of radio waves and poor visibility in the aquatic environment. Underwater navigation techniques based on analysing sonar images with the aid of machine learning have shown promising results. However, previously proposed techniques are still too computationally complex for real-time applications. This paper investigates low-complexity techniques for motion estimation based on images obtained by a sonar looking down at the seafloor. The sonar can use multiple beams within a field of view (FoV). Various beam configurations are considered according to the portions of the FoV they cover, and two estimation approaches are investigated. In one approach, the sonar images are processed directly by a deep learning (DL) network, whereas in the other, the images are first converted into reduced-size vectors before being applied to a DL network. The vector approach shows a significantly lower computation time (about 10 times faster), which makes it suitable for real-time applications. Both approaches show a similar estimation accuracy, about 10% of the maximum magnitude of the motion. The vector technique has been used to estimate a simulated trajectory and compare the estimate with the ground truth, showing a good match. It has also been applied to estimate the trajectory of an imaging sonar from a real data set collected during a ship's hull inspection. The estimated trajectory has successfully been used to build a mosaic by merging the sonar images from the real data set.
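
For illustration, the sketch below outlines the vector-based pipeline summarised above: a sonar frame is reduced to a compact vector, which a small network then maps to a motion estimate. The specific reduction (a per-beam mean intensity profile), the helper names `image_to_vector` and `mlp_forward`, and all dimensions are hypothetical placeholders for this sketch, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)


def image_to_vector(sonar_image: np.ndarray) -> np.ndarray:
    """Reduce a (num_beams x num_range_bins) sonar image to a compact vector.

    The reduction used here (per-beam mean intensity) is an assumed
    placeholder; the paper's actual reduction may differ.
    """
    return sonar_image.mean(axis=1)


def mlp_forward(x: np.ndarray, weights: list) -> np.ndarray:
    """Forward pass of a small fully connected network regressing 2-D motion."""
    h = x
    for i, (W, b) in enumerate(weights):
        h = h @ W + b
        if i < len(weights) - 1:      # hidden layers use ReLU
            h = np.maximum(h, 0.0)
    return h                          # e.g. (dx, dy) displacement estimate


# Illustrative dimensions (placeholders, not taken from the paper)
num_beams, num_bins, hidden = 64, 256, 32
weights = [
    (rng.normal(0, 0.1, (num_beams, hidden)), np.zeros(hidden)),
    (rng.normal(0, 0.1, (hidden, 2)), np.zeros(2)),
]

frame = rng.random((num_beams, num_bins))  # stand-in for one sonar frame
motion = mlp_forward(image_to_vector(frame), weights)
print("estimated (dx, dy):", motion)
```

The reduced vector has only `num_beams` elements instead of `num_beams * num_bins`, which is consistent with the roughly tenfold reduction in computation time reported for the vector approach, although the exact network and reduction sizes are not specified in this abstract.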