Deep learning is a breakthrough in machine learning research. It aims to build a deep network structure that simulates the human brain's processes of analysis and learning, interprets data through layer-by-layer abstract feature representation, and therefore has excellent feature learning ability. Using input-output performance evaluation data from colleges and universities, three sets of experiments are carried out.

First, the feature expression ability of the RBM, the basic building block of deep learning, is studied and compared with PCA. The results show that a classifier built on fine-tuned RBM features outperforms a classifier built on PCA features, and that the reconstruction error can be used to judge an appropriate hidden-layer size. As the number of RBM layers increases, the classification accuracy gradually increases, which indicates that a stack of RBMs is feasible as a feature extractor.

Second, the model proposed in this study achieves higher prediction accuracy than the other classification models, and the effectiveness of the modular deep learning model based on RBMs is confirmed from the perspectives of network convergence analysis and network output analysis: its feature learning ability is stronger than that of the DBN, and the abstract feature representation it obtains is more conducive to classification. Although the classification accuracy of the proposed model is improved, the model still has limitations: the network initialization is set on the basis of experiments and experience, and the prediction accuracy is only 88.3%, which needs to be improved. The parameter training algorithm of the RBMs can be studied further, so as to provide a more accurate reference basis for the performance evaluation of colleges and universities.

Third, in the study of dynamical systems, the stability of the time-delay unified system at the zero equilibrium and the positive equilibrium is analyzed, and the conditions under which a Hopf bifurcation occurs are given. Several conclusions are obtained through theoretical analysis, and numerical simulations further verify the validity of the theoretical results. Illustrative sketches of the three experimental components follow below.
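As a minimal illustration of the first experiment, the Python sketch below contrasts RBM features with PCA features for a downstream classifier and computes the one-step reconstruction error used to gauge the hidden-layer size. The dataset, layer sizes, and hyperparameters are illustrative assumptions only, and the joint fine-tuning of RBM weights with the classifier used in this study is not shown.

```python
# Sketch: RBM vs. PCA as feature extractors for a downstream classifier, plus the
# RBM reconstruction error used to compare candidate hidden-layer sizes.
# Dataset, layer sizes, and hyperparameters are illustrative assumptions only.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_reconstruction_error(rbm, X):
    """Mean squared error between the input and its one-step reconstruction."""
    h = rbm.transform(X)                                               # P(h = 1 | v)
    v_recon = sigmoid(h @ rbm.components_ + rbm.intercept_visible_)    # P(v = 1 | h)
    return np.mean((X - v_recon) ** 2)

# Toy data scaled to [0, 1], as required by a Bernoulli RBM.
X, y = load_digits(return_X_y=True)
X = X / 16.0
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, extractor in [
    ("PCA", PCA(n_components=32)),
    ("RBM", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=30, random_state=0)),
]:
    model = Pipeline([(name, extractor), ("clf", LogisticRegression(max_iter=1000))])
    model.fit(X_tr, y_tr)
    print(f"{name} features -> test accuracy: {model.score(X_te, y_te):.3f}")

# Reconstruction error for several candidate hidden-layer sizes.
for n_hidden in (16, 32, 64, 128):
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05, n_iter=30, random_state=0)
    rbm.fit(X_tr)
    print(f"hidden units = {n_hidden:3d}, reconstruction error = {rbm_reconstruction_error(rbm, X_te):.4f}")
```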
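The next sketch illustrates the stacked-RBM feature extractor behind the second experiment: each RBM is trained greedily on the hidden activations of the layer below, and a classifier is fitted on the top-level features so that accuracy can be tracked as the stack grows deeper. Again, the layer sizes, hyperparameters, and classifier are assumptions for illustration, not the configuration of the proposed modular model.

```python
# Sketch: greedy layer-wise training of a stack of RBMs and classification accuracy
# as a function of depth. Layer sizes and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM

def train_stack(X, layer_sizes, random_state=0):
    """Train a stack of RBMs, each on the hidden probabilities of the one below."""
    stack, inp = [], X
    for n_hidden in layer_sizes:
        rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                           n_iter=30, random_state=random_state)
        rbm.fit(inp)
        inp = rbm.transform(inp)      # hidden probabilities feed the next layer
        stack.append(rbm)
    return stack

def transform_stack(stack, X):
    for rbm in stack:
        X = rbm.transform(X)
    return X

X, y = load_digits(return_X_y=True)
X = X / 16.0
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Accuracy of a classifier on the top-level features as more RBM layers are added.
layer_sizes = [128, 64, 32]
for depth in range(1, len(layer_sizes) + 1):
    stack = train_stack(X_tr, layer_sizes[:depth])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(transform_stack(stack, X_tr), y_tr)
    acc = clf.score(transform_stack(stack, X_te), y_te)
    print(f"{depth} RBM layer(s): test accuracy = {acc:.3f}")
```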
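For the third component, the specific time-delay unified system and its Hopf conditions are not reproduced here, so the following sketch uses a different, standard example: the delayed logistic (Hutchinson) equation x'(t) = r x(t)(1 - x(t - τ)), whose positive equilibrium x* = 1 is stable for rτ < π/2 and loses stability through a Hopf bifurcation beyond that threshold. It is meant only to show the kind of numerical simulation used to verify such theoretical results; the equation, parameters, and integration scheme are assumptions, not those of this study.

```python
# Sketch (not the system analyzed in this study): the delayed logistic equation
# x'(t) = r*x(t)*(1 - x(t - tau)). Its positive equilibrium x* = 1 is stable for
# r*tau < pi/2 and oscillates (Hopf bifurcation) once r*tau exceeds that value.
import numpy as np

def simulate_delayed_logistic(r, tau, t_end=200.0, dt=0.01, history=0.5):
    """Fixed-step Euler integration with constant pre-history x(t <= 0) = history."""
    n_steps = int(t_end / dt)
    delay_steps = int(tau / dt)
    x = np.full(n_steps + delay_steps, history)
    for i in range(delay_steps, n_steps + delay_steps - 1):
        x[i + 1] = x[i] + dt * r * x[i] * (1.0 - x[i - delay_steps])
    return x[delay_steps:]

# With r = 1, the product r*tau equals tau: one case below and one above pi/2 ~ 1.57.
for tau in (1.0, 2.0):
    x = simulate_delayed_logistic(r=1.0, tau=tau)
    tail = x[-2000:]   # last 20 time units
    print(f"r*tau = {tau:.2f}: late-time oscillation amplitude = {tail.max() - tail.min():.3f}")
```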