As the global pandemic gradually eases and the air transport industry resumes steady growth, high-density flight operations are becoming the new normal. Making flight support processes intelligent is a crucial avenue for improving both the safety and the efficiency of flight operations. With advances in computer vision, video-based object tracking has shown significant potential in flight support processes. In real airport environments, however, object tracking often faces challenges such as occlusion, scale variation, rotation, and changing lighting conditions, which reduce tracking accuracy and can even cause target loss. In this paper, we focus on overcoming tracking failures caused by occlusion, deformation, and lighting variation. Taking into account the characteristics of airport environments and the specific requirements of flight support processes, we carry out the following work: (i) we describe the texture, color, and high-level semantics of video frames using features at three levels, namely Histogram of Oriented Gradients (HOG), Color Names, and convolutional neural network (CNN) features; (ii) we fuse these multi-level features using a trilinear interpolation function; (iii) we apply an improved ECO algorithm to track moving objects in the airport environment. Finally, we validate the resulting object tracking system on real airport surveillance videos. Experimental results demonstrate the effectiveness and practicality of the method under challenging conditions.
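The core idea behind the multi-feature fusion step, which is that feature maps computed at different spatial resolutions must be resampled onto a common grid before they can be combined, can be illustrated with a minimal NumPy sketch. Everything here is illustrative rather than the paper's implementation: a toy per-cell gradient histogram stands in for the full HOG pipeline, a random 11-channel map stands in for Color Names, and simple bilinear spatial resampling stands in for the interpolation-based fusion used in ECO-style trackers.

```python
import numpy as np

def gradient_histogram(img, n_bins=9, cell=8):
    """Toy HOG-like descriptor: per-cell histogram of unsigned gradient
    orientations, weighted by gradient magnitude. Illustrative only."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    h, w = img.shape
    hc, wc = h // cell, w // cell
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    feat = np.zeros((hc, wc, n_bins))
    for i in range(hc):
        for j in range(wc):
            b = bins[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for k in range(n_bins):
                feat[i, j, k] = m[b == k].sum()
    return feat

def resample_bilinear(fmap, out_h, out_w):
    """Resample an (h, w, c) feature map to (out_h, out_w, c) by bilinear
    interpolation, so maps of different resolutions share one grid."""
    h, w, c = fmap.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = fmap[y0][:, x0] * (1 - wx) + fmap[y0][:, x1] * wx
    bot = fmap[y1][:, x0] * (1 - wx) + fmap[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# Fusion: bring each feature map to a common 32x32 grid, then stack channels.
img = np.random.rand(64, 64)
hog_map = gradient_histogram(img)           # (8, 8, 9) after 8x8 cells
cn_map = np.random.rand(16, 16, 11)         # stand-in for an 11-D Color Names map
fused = np.concatenate(
    [resample_bilinear(hog_map, 32, 32), resample_bilinear(cn_map, 32, 32)],
    axis=-1)                                # (32, 32, 20)
```

In an actual tracker a CNN feature map (e.g. from a pretrained backbone) would be resampled and stacked in exactly the same way; the fused tensor then drives the correlation-filter response.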