Many natural language processing problems are now addressed with neural networks trained on large datasets. Prior work on slogan classification has focused on single-task supervised objectives with limited data, whereas a multi-task learning framework can learn jointly across several tasks related to generating multi-class slogan types. This study proposes a multi-task model, the slogan generative adversarial network system (Slo-GAN), which combines a generative adversarial network with recurrent neural networks (RNNs) to enhance coherence and diversity in slogan generation. Slo-GAN generates a new slogan-type text corpus, improving generalization during training. We also explored active learning (AL) and meta-learning (ML) to reduce dataset-labeling effort: AL required 10% fewer annotations than ML but still needed about 70% of the full dataset to reach baseline performance. The entire Slo-GAN framework is supervised and trained jointly on all of these tasks. Slo-GAN filters generated text, retaining candidates with higher quality scores, and achieves a classification accuracy of 87.2%. Finally, we performed a cross-domain experiment on related datasets, reinforcing our claims about the distinctiveness of our dataset and the difficulty of adapting between bilingual dialects.
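
The adversarial training scheme underlying a GAN can be illustrated with a minimal sketch. This is not the paper's Slo-GAN (whose generator operates over slogan text with RNNs); it is a generic, assumed illustration on 1-D toy data using only NumPy, showing the alternating discriminator/generator updates that any GAN, including a text GAN, is built on.

```python
# Minimal sketch of the alternating GAN training loop (NOT the paper's
# Slo-GAN): a toy 1-D example with an affine generator and a logistic
# discriminator, trained with hand-derived gradients.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: samples from N(4, 1).
    return rng.normal(4.0, 1.0, size=n)

# Generator G(z) = a*z + b with noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, n = 0.05, 64
for step in range(2000):
    # --- Discriminator update: push D(real) -> 1, D(fake) -> 0 ---
    x_real = real_batch(n)
    z = rng.normal(size=n)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of binary cross-entropy w.r.t. (w, c).
    gw = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
    gc = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- Generator update: push D(fake) -> 1 ---
    z = rng.normal(size=n)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # Chain rule through x_fake = a*z + b (non-saturating loss -log D(fake)).
    g_common = (d_fake - 1.0) * w
    ga = np.mean(g_common * z)
    gb = np.mean(g_common)
    a -= lr * ga
    b -= lr * gb

# After training, the generator's mean (b) drifts toward the real mean of 4.
print(b)
```

In a text GAN the generator would be an RNN emitting token sequences and the discriminator a sequence classifier, but the two-phase update structure is the same.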