Purpose: The effectiveness of image-guided radiation therapy with precise dose delivery depends heavily on accurate target localization, yet the target may move during treatment due to, e.g., breathing and drift. It is therefore important to track this motion and adjust the radiation delivery accordingly. Tracking generally requires reliable target appearance and image features, but in ultrasound imaging acoustic shadowing and other artifacts may degrade target visibility, leading to substantial tracking errors. To minimize such errors, we propose a method based on so-called supporters, a tracking technique from computer vision. This allows us to leverage information from surrounding motion to improve the robustness of motion tracking in 2D ultrasound image sequences of the liver.
Methods: Image features that are potentially useful for predicting the target position are tracked individually, and a supporter model that captures the coupling of motion between these features and the target is learned online. This model is then used to predict the target position whenever the target cannot otherwise be tracked reliably.
Results: The proposed method was evaluated on the Challenge on Liver Ultrasound Tracking (CLUST)-2015 dataset. Leave-one-out cross-validation was performed on the training set of 24 2D image sequences, each 1-5 minutes long. The method was then applied to the test set (24 2D sequences), where the results were evaluated by the challenge organizers, yielding a mean tracking error of 1.04 mm and a 95th-percentile error of 2.26 mm over all targets. We also devised a simulation framework to emulate acoustic shadowing artifacts from the ribs, in which the method tracked the targets effectively despite the shadows.
Conclusions: The results support the feasibility and demonstrate the advantages of using supporters. The proposed method improves on its baseline tracker, which uses optic flow and elliptic vessel models, and yields the state-of-the-art real-time tracking solution for the CLUST challenge.
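
To make the supporter idea concrete, the sketch below illustrates one possible form of such a model: each supporter feature maintains an online-learned offset to the target and a reliability weight, and the target position is predicted as a weighted vote of supporter positions plus their offsets when the target itself is occluded. The class name `SupporterModel`, the exponential-moving-average update, and the learning rate `alpha` are illustrative assumptions, not the exact model used in the paper.

```python
import numpy as np


class SupporterModel:
    """Illustrative supporter model: learns supporter-to-target offsets online."""

    def __init__(self, n_supporters, alpha=0.1):
        self.offsets = np.zeros((n_supporters, 2))  # learned supporter -> target offsets
        self.weights = np.ones(n_supporters)        # per-supporter reliability weights
        self.alpha = alpha                          # online learning rate (assumed value)

    def update(self, supporter_pos, target_pos):
        """Update offsets and weights while the target itself is tracked reliably."""
        observed = target_pos[None, :] - supporter_pos              # current offsets
        error = np.linalg.norm(observed - self.offsets, axis=1)     # offset inconsistency
        # Exponential moving average of the offsets; down-weight inconsistent supporters.
        self.offsets = (1 - self.alpha) * self.offsets + self.alpha * observed
        self.weights = (1 - self.alpha) * self.weights + self.alpha * np.exp(-error)

    def predict(self, supporter_pos):
        """Predict the target as a weighted vote of supporter positions + learned offsets."""
        votes = supporter_pos + self.offsets
        w = self.weights / self.weights.sum()
        return (w[:, None] * votes).sum(axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = SupporterModel(n_supporters=4)
    target = np.array([50.0, 80.0])
    supporters = target + rng.normal(0.0, 10.0, size=(4, 2))

    # Frames where the target is visible: learn the motion coupling online.
    for _ in range(20):
        shift = rng.normal(0.0, 1.0, size=2)                    # shared (e.g. breathing) motion
        target += shift
        supporters += shift + rng.normal(0.0, 0.2, size=(4, 2)) # small independent jitter
        model.update(supporters, target)

    # Frame where the target is occluded (e.g. by an acoustic shadow): predict from supporters.
    shift = rng.normal(0.0, 1.0, size=2)
    target += shift
    supporters += shift
    print("true:", target, "predicted:", model.predict(supporters))
```

In this toy setup, the supporter features are assumed to be tracked by some external tracker (e.g. optic flow); the model only learns the motion coupling and falls back to the weighted vote when the target's own tracker is deemed unreliable.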