Accurate identification of potential vulnerabilities in software is crucial, especially when working with real-world vulnerability data, where severe class imbalance and complex dependency relationships make prediction exceptionally difficult. Traditional single-task learning and ensemble learning methods typically perform poorly on highly imbalanced datasets: they tend to overlook minority classes, which frequently contain the most critical vulnerabilities, and fail to fully learn vulnerability features. To address these issues, we propose MTLPT, a novel multi-task learning model that aims to improve the accuracy and efficiency of vulnerability prediction. MTLPT combines custom lightweight Transformer blocks with positional encoding layers to capture long-range dependencies and complex contextual information in source code. This design allows MTLPT to handle multiple vulnerability prediction tasks simultaneously, learning the latent relationships between different vulnerability types and improving the model's sensitivity to rare but severe vulnerabilities. In addition, MTLPT introduces a dynamic-weight loss function that adjusts each task's loss weight according to its prediction difficulty, effectively mitigating the challenges posed by imbalanced data. We conducted comparative experiments on a highly imbalanced subset of a real-world vulnerability dataset. The results show that, compared with existing single-task and ensemble learning methods, MTLPT achieves significant gains on multiple key performance metrics and, in particular, identifies minority-class vulnerabilities with higher sensitivity and accuracy. These gains validate the effectiveness of the proposed multi-task learning framework on complex, imbalanced vulnerability data and highlight the practical value of the MTLPT framework, its custom lightweight Transformer blocks, positional encoding layers, and dynamic-weight loss function. Furthermore, a series of ablation experiments evaluates the contribution of each component: the custom lightweight Transformer blocks and positional encoding layers enhance the model's ability to learn complex code structures and behavioral patterns, while the dynamic-weight loss function plays a critical role in optimizing the multi-task training process.
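To make the overall structure concrete, the following is a minimal PyTorch sketch of the kind of architecture described above: a shared lightweight Transformer encoder with a positional encoding layer over token-embedded source code, and one classification head per vulnerability prediction task. All module names, layer counts, and hyperparameters here are illustrative assumptions, not the paper's exact specification.

```python
import torch
import torch.nn as nn

class MTLPTSketch(nn.Module):
    """Illustrative multi-task sketch: a shared lightweight Transformer
    encoder with learned positional encodings and one head per task.
    Names and sizes are assumptions, not MTLPT's actual configuration."""

    def __init__(self, vocab_size, num_classes_per_task,
                 d_model=128, nhead=4, num_layers=2, max_len=512):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Positional encoding layer for token positions in the source code.
        self.pos_emb = nn.Embedding(max_len, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead,
            dim_feedforward=4 * d_model, batch_first=True)
        # "Lightweight" here means few layers and a small hidden size
        # (an assumption about what the paper's custom blocks entail).
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        # One output head per vulnerability prediction task.
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, c) for c in num_classes_per_task])

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded source code tokens.
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        x = self.token_emb(token_ids) + self.pos_emb(positions)
        h = self.encoder(x)             # (batch, seq_len, d_model)
        pooled = h.mean(dim=1)          # mean-pool over the token dimension
        return [head(pooled) for head in self.heads]  # per-task logits
```

The shared encoder is what lets the tasks inform one another: gradients from every task head flow into the same Transformer blocks, so features useful for common vulnerability types can also benefit rare ones.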
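The dynamic-weight loss admits several formulations; the paper's exact scheme is not reproduced here. One plausible reading, sketched below under that caveat, proxies each task's difficulty with an exponential moving average (EMA) of its recent loss and upweights harder tasks via a softmax over those estimates. The function name, the EMA mechanism, and the `momentum` and `tau` parameters are all illustrative assumptions.

```python
import torch

def dynamic_weighted_loss(task_losses, ema_losses, momentum=0.9, tau=1.0):
    """Combine per-task losses with weights that grow for harder tasks.
    Difficulty is proxied by an EMA of each task's loss; a softmax over
    the EMAs yields normalized weights. This is an illustrative scheme,
    not necessarily the paper's exact formulation."""
    losses = torch.stack(task_losses)  # (num_tasks,)
    with torch.no_grad():
        # Update difficulty estimates without tracking gradients.
        ema_losses.mul_(momentum).add_((1 - momentum) * losses)
        # Harder tasks (larger EMA loss) receive larger weights.
        weights = torch.softmax(ema_losses / tau, dim=0)
    return (weights * losses).sum(), ema_losses
```

In a training loop, `task_losses` would be the list of per-task criterion outputs (e.g., one cross-entropy per head) and `ema_losses` a persistent `torch.zeros(num_tasks)` buffer carried across steps, so that the weighting tracks difficulty as training progresses.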