Directly combining multi-modal features can overlook the complementary information between modalities and disregard their interaction. To address this, this paper proposes a face anti-spoofing approach driven by multi-modal confidence constraints and adaptive feature weighting. First, a multi-modal confidence constraint (MCC) loss is employed to better regulate the fused features and prevent overfitting; the MCC loss constrains the loss of each modality and applies modulation constraints to improve the accuracy of the predicted probabilities. To fully exploit the discriminant information and suppress interfering factors, an adaptive feature weighting (AFW) strategy is further employed to dynamically assign prediction probabilities across modalities. Finally, the fused features are classified with a binary cross-entropy loss. Qualitative and quantitative experiments on two public datasets, CASIA-SURF and CASIA-SURF CeFA, demonstrate the effectiveness of the approach, which achieves average error rates of 0.068% and 2.160%, respectively.
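The confidence-weighted fusion described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the abstract does not give the MCC/AFW formulas, so the confidence measure (maximum softmax probability), the weight normalization, and all function and variable names here are assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_weights(confidences):
    """Normalize per-modality confidences into fusion weights
    (an assumed stand-in for the paper's AFW strategy)."""
    total = sum(confidences)
    return [c / total for c in confidences]

def fuse_predictions(modal_logits):
    """Fuse per-modality spoof probabilities, weighting each modality
    by its prediction confidence (here: its max softmax probability)."""
    probs = [softmax(l) for l in modal_logits]   # [p_live, p_spoof] per modality
    confs = [max(p) for p in probs]              # assumed confidence proxy
    weights = adaptive_weights(confs)
    return sum(w * p[1] for w, p in zip(weights, probs))  # fused spoof probability

def bce_loss(p, label, eps=1e-7):
    """Binary cross-entropy on the fused spoof probability."""
    p = min(max(p, eps), 1.0 - eps)
    return -(label * math.log(p) + (1 - label) * math.log(1.0 - p))

# Example: three branches (e.g. RGB, depth, IR) each emit [live, spoof] logits.
logits = [[0.2, 1.5], [0.1, 2.0], [1.0, 0.8]]
p_spoof = fuse_predictions(logits)
loss = bce_loss(p_spoof, label=1)
```

In this toy version, a modality whose softmax output is more peaked contributes more to the fused probability, which mirrors the abstract's idea of dynamically assigning prediction probabilities across modalities before the binary cross-entropy classification.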