Ultrasound (US) scans of the inferior vena cava (IVC) are widely adopted by healthcare providers to assess patients' volume status. Unfortunately, this technique is highly operator-dependent. Recently, new techniques have been introduced to extract stable and objective information from US images by automatically tracking the IVC edges. However, these methods require prior interaction with the operator, which wastes time and leaves the technique partially subjective. In this paper, two deep learning methods, the YOLO (You Only Look Once) v4 and YOLO v4 tiny networks, commonly used for fast object detection, are applied to locate the IVC and to classify the US scan as either a long-axis or a short-axis view. The output of these algorithms can be used to remove operator dependency, to reduce the time required to start an IVC analysis, and to automatically recover the vein if it is lost for a few frames during acquisition. The two networks were trained on frames extracted from 18 subjects and labeled by 4 operators. They were also trained on a linear combination of pairs of frames, which captures information on both tissue anatomy and tissue motion. The two models showed similar accuracy in preliminary tests on the entire dataset, so YOLO v4 tiny, which has a much lower computational cost, was selected for an additional cross-validation in which training and test frames were taken from different subjects. The classification accuracy was approximately 88% when using original frames, but it reached 95% when pairs of frames were processed to also include information on tissue movement, indicating the importance of accounting for tissue motion to improve the accuracy of our IVC detector.
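
The frame-pairing step can be illustrated with a minimal sketch. The abstract states only that pairs of frames were combined linearly to encode both anatomy and motion; the coefficient `alpha`, the particular weighting below, and the function name `combine_frames` are illustrative assumptions, not the paper's actual preprocessing.

```python
import numpy as np

def combine_frames(frame_prev: np.ndarray, frame_curr: np.ndarray,
                   alpha: float = 0.75) -> np.ndarray:
    """Linearly combine two consecutive grayscale US frames.

    combined = alpha * curr - (1 - alpha) * prev
             = (2*alpha - 1) * curr + (1 - alpha) * (curr - prev),
    i.e. a weighted sum of an anatomy term (the current frame) and a
    motion term (the inter-frame difference). The value of `alpha` is
    an assumption; the paper does not report the weights used.
    """
    prev = frame_prev.astype(np.float32)
    curr = frame_curr.astype(np.float32)
    combined = alpha * curr - (1.0 - alpha) * prev
    # Rescale to the 8-bit range expected by a typical YOLO input pipeline.
    combined -= combined.min()
    combined *= 255.0 / (np.ptp(combined) + 1e-8)
    return combined.astype(np.uint8)
```

In this sketch, choosing alpha between 0.5 and 1 keeps the static appearance of the current frame while emphasizing pixels that change between frames, such as the pulsating IVC walls; the combined image is then fed to the detector in place of a single raw frame.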