Several adaptive visual tracking algorithms have recently been proposed to capture the varying appearance of a target. However, adaptability can also cause gradual drift, especially when the target appearance changes drastically. This paper establishes theoretical principles for the online learning of a target model and then presents a novel adaptive tracking algorithm that effectively copes with drastic variations in target appearance while resisting gradual drift. Once the target is localized in each frame, the patches sampled from the target observation are first classified into foreground and background by an effective classifier. An adaptive, pure, and time-continuous target model is then extracted online through two processes, an absorption process and a rejection process: only reliable features with high separability are absorbed into the new target model, while "dangerous" features that may cause the interfusion of background patterns are rejected. To minimize the influence of the background and preserve the temporal continuity of the target model, two collaborative models, a dominant model and a continuous model, are designed. The proposed learning and generation mechanisms for the target model are finally embedded in an adaptive tracking system. Experimental results demonstrate the robust performance of the proposed algorithm under challenging conditions.
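The absorption/rejection idea described above can be sketched as a simple feature-selection step. The following is a minimal illustration, not the paper's actual method: it assumes foreground and background appearance are summarized as normalized feature histograms, and uses a normalized log-likelihood ratio as a stand-in separability score (the function name, thresholds, and scoring rule are all illustrative assumptions).

```python
import numpy as np

def select_features(fg_hist, bg_hist, absorb_thresh=0.5, reject_thresh=-0.5):
    """Illustrative absorption/rejection sketch (not the paper's exact method).

    fg_hist, bg_hist: normalized feature histograms of the foreground and
    background patches. Separability is approximated by a log-likelihood
    ratio scaled to [-1, 1]; features strongly favoring the foreground are
    absorbed, features strongly favoring the background are rejected.
    """
    eps = 1e-6  # avoid log(0) and division by zero
    ratio = np.log((fg_hist + eps) / (bg_hist + eps))
    sep = ratio / (np.abs(ratio).max() + eps)  # normalize to [-1, 1]
    absorbed = sep >= absorb_thresh    # reliable features with high separability
    rejected = sep <= reject_thresh    # "dangerous" background-like features
    return absorbed, rejected
```

For example, a feature concentrated in the foreground histogram would be absorbed, one concentrated in the background would be rejected, and ambiguous features would be left in neither set.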