Recent discriminative trackers, especially those based on Correlation Filters (CFs), have shown dominant performance in visual tracking. Such trackers benefit greatly from multi-resolution deep features, exploiting the expressive power of deep Convolutional Neural Networks (CNNs). However, distractors in complex scenarios, such as similar targets, occlusion, and deformation, lead to model drift. Meanwhile, learning deep features results in feature redundancy, and the increasing number of learned parameters introduces the risk of over-fitting. In this paper, we propose a discriminative CF-based visual tracking method, called dimension adaption correlation filters (DACF). First, the framework adopts multi-channel deep CNN features to obtain a discriminative sample appearance model that resists background clutter. Moreover, a dimension adaption operation is introduced to remove relatively irrelevant parameters as far as possible, which mitigates over-fitting and helps the model adapt effectively to different tracking scenes. Furthermore, the DACF formulation can be optimized efficiently with the alternating direction method of multipliers (ADMM). Extensive evaluations are conducted on the OTB2013, OTB2015, VOT2016, and UAV123 benchmarks. The experimental results show that our tracker achieves remarkable performance; in particular, DACF obtains an AUC score of 0.698 on OTB2015.

INDEX TERMS Correlation filters, multi-channel feature learning, object tracking.