Discriminative Correlation Filters (DCF) have recently shown excellent performance in visual object tracking. The correlation for computing a response map can be conducted efficiently in the Fourier domain via the Discrete Fourier Transform (DFT) of the inputs, where the DFT of a real-valued image exhibits conjugate symmetry in the Fourier domain. To enhance the robustness and discriminative ability of the filters, many efforts have been devoted to optimizing the learning process. Regularization methods, such as spatial or temporal regularization, used in existing DCF trackers aim to enhance the capacity of the filters. However, most existing methods still fail to handle severe appearance variations, in particular large scale and aspect ratio changes. In this paper, we propose a novel framework that employs adaptive spatial regularization and temporal regularization to learn reliable filters in both the spatial and temporal domains for tracking. To alleviate the influence of the background and distractors on non-rigid target objects, two sub-models are combined and multiple features are utilized to learn robust correlation filters. In addition, most DCF trackers apply a 1-dimensional scale-space search method and therefore suffer under appearance changes such as non-rigid deformation. We propose a 2-dimensional scale-space search method to find appropriate scales that adapt to large scale and aspect ratio changes. We perform comprehensive experiments on four benchmarks: OTB-100, VOT-2016, VOT-2018, and LaSOT. The experimental results illustrate the effectiveness of our tracker, which achieves competitive tracking performance. On OTB-100, our tracker achieves a gain of 0.8% in success score compared to the best existing DCF trackers. On VOT-2018, our tracker outperforms the top DCF trackers with a gain of 1.1% in Expected Average Overlap (EAO). On LaSOT, we obtain a gain of 5.2% in success score compared to the best DCF trackers.
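To make the Fourier-domain correlation concrete, the following is a minimal sketch (not the proposed tracker) of how a DCF response map can be evaluated with NumPy; the single-channel filter, the rfft2-based handling of conjugate symmetry, and the synthetic test patch are all illustrative assumptions.

```python
import numpy as np

def dcf_response(filter_spatial, patch):
    """Correlation response map computed in the Fourier domain.

    Both inputs are 2-D real-valued arrays of the same shape.
    Element-wise multiplication in the Fourier domain corresponds to
    circular cross-correlation in the spatial domain.
    """
    # DFT of a real image; rfft2 stores only half the spectrum
    # because of conjugate symmetry.
    H = np.fft.rfft2(filter_spatial)
    X = np.fft.rfft2(patch)
    # Cross-correlation (not convolution): conjugate one operand.
    return np.fft.irfft2(np.conj(H) * X, s=patch.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.standard_normal((64, 64))
    # A filter identical to the patch should peak at zero shift.
    resp = dcf_response(patch, patch)
    peak = np.unravel_index(np.argmax(resp), resp.shape)
    print("peak location:", peak)  # expected: (0, 0)
```

In a full DCF tracker the filter is learned per feature channel with regularization, and the per-channel responses are summed; the example above only illustrates why the response computation itself is cheap once the inputs are in the Fourier domain.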