Objective. Imaging genomics has recently shown great potential for predicting postoperative recurrence in lung cancer patients. However, prediction methods based on imaging genomics face several limitations, including small sample sizes, high-dimensional information redundancy, and poor multimodal fusion efficiency. This study aims to develop a new fusion model to overcome these challenges.

Approach. A dynamic adaptive deep fusion network (DADFN) model based on imaging genomics is proposed for predicting lung cancer recurrence. In this model, a 3D spiral transformation is used to augment the dataset, better preserving the 3D spatial information of the tumor for deep feature extraction. The intersection of the genes screened by the LASSO, F-test and Chi-square (CHI-2) selection methods is taken to eliminate redundant data and retain the most relevant gene features. A dynamic adaptive fusion mechanism based on the cascade idea is proposed, in which multiple base classifiers of different types are integrated in each layer; this fully exploits the correlation and diversity of multimodal information to better fuse deep features, handcrafted features and gene features.

Main results. The experimental results show that the DADFN model achieves good performance, with an accuracy of 0.884 and an AUC of 0.863, indicating that the model is effective for predicting lung cancer recurrence.

Significance. The proposed model has the potential to help physicians stratify the risk of lung cancer patients and to identify patients who may benefit from a personalized treatment option.
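
The gene-selection and cascade-fusion steps summarized above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' DADFN implementation: the data names (X_gene, y), the number of retained genes k, the cascade depth, and the specific base classifiers are hypothetical, and a simple gcForest-style cascade of heterogeneous classifiers stands in for the paper's dynamic adaptive fusion mechanism.

```python
"""Sketch of (1) gene selection by intersecting LASSO, F-test and Chi-2 picks,
and (2) a cascade of heterogeneous base classifiers whose class-probability
outputs augment the features passed to the next layer. Assumed inputs:
X_gene (samples x genes), y (binary recurrence labels)."""
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.feature_selection import SelectKBest, f_classif, chi2
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import MinMaxScaler


def select_gene_features(X_gene, y, k=50):
    """Keep only the genes chosen by all three selectors (intersection)."""
    # LASSO screening: genes with non-zero coefficients
    lasso = LassoCV(cv=5, random_state=0).fit(X_gene, y)
    lasso_idx = set(np.flatnonzero(lasso.coef_))

    # F-test: top-k genes by ANOVA F-score
    f_idx = set(SelectKBest(f_classif, k=k).fit(X_gene, y).get_support(indices=True))

    # Chi-2 needs non-negative inputs, so rescale to [0, 1] first
    X_pos = MinMaxScaler().fit_transform(X_gene)
    chi_idx = set(SelectKBest(chi2, k=k).fit(X_pos, y).get_support(indices=True))

    keep = sorted(lasso_idx & f_idx & chi_idx)  # intersection of the three selections
    return X_gene[:, keep], keep


def cascade_fuse(X, y, n_layers=3):
    """Cascade fusion sketch: each layer runs several different base classifiers
    and concatenates their out-of-fold class probabilities onto the features."""
    base_models = [
        RandomForestClassifier(n_estimators=200, random_state=0),
        LogisticRegression(max_iter=1000),
        SVC(probability=True, random_state=0),
    ]
    features = X
    for _ in range(n_layers):
        probas = [cross_val_predict(m, features, y, cv=5, method="predict_proba")
                  for m in base_models]
        features = np.hstack([features] + probas)  # augmented input for next layer
    return features
```

In this sketch the selected gene features would be concatenated with the deep and handcrafted image features before being passed to cascade_fuse; the paper's model additionally adapts the fusion dynamically, which is not reproduced here.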