Porosity is a key parameter for characterizing rock reservoirs and is essential for evaluating the permeability and fluid-migration behavior of subsurface rocks. To overcome the limitations of traditional well-log porosity interpretation methods when confronted with geological complexity and nonlinear relationships, this study introduces the Dynamic Transformer model from machine learning, with the aim of improving the accuracy and generalization ability of log-based porosity prediction. The Dynamic Transformer is a deep learning model built on the self-attention mechanism. Compared with traditional sequence models, it processes sequential data more effectively and can attend to different parts of the input sequence at different positions, allowing it to better capture global information and long-range dependencies. This is a significant advantage for logging tasks that involve complex geological structures and depth-ordered sequence data. In addition, the model incorporates dynamic convolution kernels, which help it capture the dependencies between different positions in the input sequence; this module is intended to strengthen the modeling of long-range dependence and thereby improve performance. We trained the model on a well-log dataset to ensure good generalization ability, and we comprehensively compared the performance of the Dynamic Transformer with that of other traditional machine learning models to verify its superiority in log porosity prediction. Analysis of the experimental results shows that the Dynamic Transformer performs well on the log porosity prediction task. This model brings a new perspective to the development of well-logging technology and provides a more efficient and accurate tool for the geosciences.
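To make the self-attention idea concrete, the sketch below shows scaled dot-product self-attention applied to a window of depth-ordered log samples, where each row attends to every other depth position. This is a minimal NumPy illustration of the generic mechanism, not the paper's implementation; the window length, feature count, and weight matrices are hypothetical stand-ins.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of log samples.

    X: (seq_len, d_model) window of log features at consecutive depths
    (e.g. gamma ray, density, neutron, sonic). Wq, Wk, Wv are learned
    projections; here they are random placeholders.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # (seq_len, seq_len) pairwise affinities
    A = softmax(scores, axis=-1)          # each depth's weights over all depths
    return A @ V, A                       # weighted mix of values + attention map

rng = np.random.default_rng(0)
seq_len, d_model = 16, 8                  # hypothetical: 16 depth samples, 8 features
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
print(out.shape)                          # (16, 8)
```

Because the attention map `A` couples every depth with every other depth in the window, the model can relate distant intervals directly, which is the property the abstract highlights for capturing long-range geological dependencies.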