Visual tracking is challenging due to factors such as occlusion, background clutter, abrupt target motion, and illumination variation. In recent years, subspace representation and sparse coding techniques have yielded significant improvements in tracking performance. However, these gains have come at the expense of the locality and similarity relationships among the instances to be encoded. In this paper, a Graph Regularized and Locality-constrained Coding (GRLC) technique is proposed that captures the local manifold structure of the data in order to preserve locality and similarity information among instances. GRLC incorporates a similarity-preserving term into the objective function of the locality-constrained linear coding model, thereby alleviating some of the instability issues inherent to such coding methods. In the proposed scheme, a graph Laplacian regularizer serves as a smoothing operator, so that both the representation dictionary and the coding coefficients are learned while preserving the local structure of the data; this operator ensures that the representations vary smoothly along the geodesics of the data manifold. By optimizing the GRLC objective function, a discriminative dictionary of instances is obtained iteratively, and the coefficients of each candidate are computed with respect to this learned dictionary. Finally, an effective observation likelihood function based on reconstruction error and a simple dictionary update scheme are proposed for visual target tracking. Experimental results on the CVPR 2013 Visual Tracker Benchmark demonstrate favorable performance of the proposed technique in terms of both accuracy and robustness.
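The abstract does not state the objective explicitly; as a rough sketch only, assuming the standard locality-constrained linear coding (LLC) formulation with instances $x_i$, dictionary $B$, codes $c_i$ collected in $C$, locality adaptor $d_i$, and a nearest-neighbour affinity matrix $W$ with graph Laplacian $L = D - W$ (all symbols here are assumptions, not taken from the paper), a graph-regularized objective of the kind described would take a form such as
\[
\min_{B,\,C}\;\sum_{i=1}^{N}\bigl\| x_i - B c_i \bigr\|_2^2
\;+\;\lambda \sum_{i=1}^{N}\bigl\| d_i \odot c_i \bigr\|_2^2
\;+\;\beta\,\operatorname{Tr}\!\bigl( C L C^{\top} \bigr),
\]
where the last term satisfies $\operatorname{Tr}(C L C^{\top}) = \tfrac{1}{2}\sum_{i,j} W_{ij}\,\| c_i - c_j \|_2^2$, so instances that are close on the data manifold (large $W_{ij}$) are encouraged to receive similar codes, which is the similarity-preserving behaviour the abstract attributes to the graph Laplacian smoothing operator.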