In natural language processing, tasks such as emotion recognition, irony detection, hate speech detection, offensive language identification, and stance detection are pivotal for understanding user-generated content. Although several task-specific and multitask learning models have been proposed, a unified framework that can effectively address these tasks simultaneously is still lacking. This research introduces a novel unified framework designed to tackle multiple NLP tasks concurrently, aiming to outperform existing task-specific and multitask models in accuracy, F1-score, and AUC-ROC. We compared the proposed framework against several baselines, including task-specific models (SVM, Random Forest, LSTM, CNN, and BERT) and multitask learning frameworks (Hard Parameter Sharing, Soft Parameter Sharing, Cross-stitch Networks, MMoE, and T5). Performance was evaluated across all tasks, and statistical significance was assessed with the Wilcoxon signed-rank test. An ablation study was also conducted to quantify the contribution of individual components of the proposed method. The framework consistently outperformed all baselines across every task: in emotion recognition, for example, it achieved an accuracy of 0.899, an F1-score of 0.883, and an AUC-ROC of 0.971, surpassing all baseline models. The Wilcoxon signed-rank test confirmed that these improvements over the baselines were statistically significant across all datasets.
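As a minimal sketch of the significance-testing step described above (not the paper's actual code), the following shows how paired per-dataset scores from two models can be compared with the Wilcoxon signed-rank test via `scipy.stats.wilcoxon`. The score values are illustrative placeholders, not results from the paper.

```python
# Hedged sketch: comparing two models' paired per-dataset F1-scores with
# the Wilcoxon signed-rank test. All numbers below are hypothetical.
from scipy.stats import wilcoxon

# Hypothetical F1-scores of the proposed model vs. one baseline,
# measured on the same set of datasets (paired observations).
proposed = [0.883, 0.871, 0.902, 0.856, 0.890]
baseline = [0.851, 0.840, 0.877, 0.830, 0.861]

# Paired, non-parametric test: makes no normality assumption about
# the distribution of score differences.
stat, p_value = wilcoxon(proposed, baseline)
print(f"W = {stat:.3f}, p = {p_value:.4f}")

# A p-value below a chosen alpha (e.g. 0.05) indicates the paired
# score differences are unlikely under the null of no difference.
```

With only five paired samples, the smallest attainable two-sided exact p-value is 0.0625, so in practice such a test is run over many more datasets or cross-validation folds than shown here.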