Object tracking based on visible images may fail when the visible images are unreliable, for example when illumination conditions are poor. Infrared images reveal the thermal radiation of objects and are insensitive to such factors. Owing to the complementary nature of visible and infrared images, RGB-infrared fusion tracking has attracted widespread attention in recent years. In this paper, an RGB-infrared fusion tracking method based on fully convolutional Siamese networks, termed SiamFT, is proposed. Visible and infrared images are first processed by two Siamese networks, namely the visible network and the infrared network, respectively. Convolutional features of the visible and infrared template images extracted by the two Siamese networks are then concatenated to form the fused template feature, while convolutional features of the visible and infrared search images are fused adaptively through the proposed feature fusion network. In particular, a modality weight computation method based on the response value of the Siamese network is proposed to estimate the reliability of each modality. Cross-correlation is then applied to the fused template feature and the fused search feature to produce the final response map, from which the tracking result is obtained. Extensive experiments indicate that the proposed SiamFT outperforms state-of-the-art fusion tracking algorithms while running at real-time speed.

INDEX TERMS Deep learning, fusion tracking, object tracking, Siamese network.
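
As a rough illustration of the pipeline summarized above, the PyTorch sketch below wires two Siamese branches, concatenates their template features, weights the search features by a response-based reliability score, and applies cross-correlation to obtain a response map. The backbone layers, the softmax-normalized modality weights, and the fuse_search layer are illustrative assumptions, not the authors' exact design.

# Minimal sketch of a SiamFT-style dual-branch fusion tracker (assumed details marked).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    """Small fully convolutional backbone; a stand-in for the paper's Siamese branch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(32, 64, 5), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3),
        )

    def forward(self, x):
        return self.features(x)

def xcorr(z, x):
    """Cross-correlate template features z with search features x, one pair per batch item."""
    b, c, h, w = x.shape
    out = F.conv2d(x.reshape(1, b * c, h, w), z, groups=b)
    return out.reshape(b, 1, out.shape[-2], out.shape[-1])

class SiamFTSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.visible = Branch()   # visible network
        self.infrared = Branch()  # infrared network
        # Hypothetical 1x1 fusion layer over the concatenated search features.
        self.fuse_search = nn.Conv2d(2 * 128, 2 * 128, 1)

    def forward(self, z_rgb, z_ir, x_rgb, x_ir):
        # Template branch: concatenate visible and infrared template features.
        z = torch.cat([self.visible(z_rgb), self.infrared(z_ir)], dim=1)

        # Search branch: weight each modality by a response-based reliability estimate.
        fx_rgb, fx_ir = self.visible(x_rgb), self.infrared(x_ir)
        w_rgb = xcorr(self.visible(z_rgb), fx_rgb).amax(dim=(1, 2, 3))
        w_ir = xcorr(self.infrared(z_ir), fx_ir).amax(dim=(1, 2, 3))
        w = torch.softmax(torch.stack([w_rgb, w_ir], dim=1), dim=1)  # assumed normalization

        x = torch.cat([fx_rgb * w[:, 0].view(-1, 1, 1, 1),
                       fx_ir * w[:, 1].view(-1, 1, 1, 1)], dim=1)
        x = self.fuse_search(x)

        # Final response map from cross-correlating fused template and search features.
        return xcorr(z, x)

if __name__ == "__main__":
    net = SiamFTSketch()
    z_rgb = torch.randn(1, 3, 127, 127)   # visible template patch
    z_ir = torch.randn(1, 3, 127, 127)    # infrared template patch
    x_rgb = torch.randn(1, 3, 255, 255)   # visible search region
    x_ir = torch.randn(1, 3, 255, 255)    # infrared search region
    print(net(z_rgb, z_ir, x_rgb, x_ir).shape)  # response map, e.g. (1, 1, 33, 33)

In this sketch the peak of each single-modality response map serves as the reliability score, echoing the abstract's response-value-based modality weighting; the tracking result would be read off the location of the maximum in the final response map.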