Music composition, an intricate blend of human creativity and emotion, poses substantial challenges for generating melodies from lyrics: these challenges hinder effective learning in neural networks, and inadequate depiction of harmonic structure fails to encapsulate the complex relationships between lyrics and melodies. Existing methods often struggle to balance emotional depth with structural coherence, producing compositions that lack both the intended emotional resonance and musical consistency. To overcome these issues, this research introduces a novel approach, the Dual Interactive Wasserstein Fourier Acquisitive Generative Adversarial Network (DIWFA-GAN), which integrates techniques such as swish activation functions and the Giant Trevally Optimizer (GTO) for parameter optimization. The GTO, inspired by the movement patterns of the giant trevally fish, provides efficient and effective parameter optimization, improving the model's convergence speed and accuracy. Comparative analysis against recent models shows superior performance on both the LMD-full MIDI and Reddit MIDI datasets, with inception scores of 9.36 and 2.98, Fréchet inception distances of 35.29 and 135.54, and accuracies of 99.98% and 99.95%, respectively. DIWFA-GAN thus significantly outperforms existing models in generating high-fidelity melodies, as evidenced by its superior inception scores, Fréchet inception distances, and accuracies on both datasets.
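The swish activation cited above is the standard smooth, non-monotonic nonlinearity x · sigmoid(βx); a minimal sketch of it follows (the `beta` scaling parameter and function name are illustrative conventions, not details taken from the DIWFA-GAN architecture itself):

```python
import math

def swish(x: float, beta: float = 1.0) -> float:
    """Swish activation: x * sigmoid(beta * x).

    With beta = 1 this is the commonly used SiLU form; larger beta
    pushes the function toward ReLU-like behavior. How DIWFA-GAN
    sets beta is not specified in the abstract, so beta = 1 here
    is only an assumption.
    """
    return x * (1.0 / (1.0 + math.exp(-beta * x)))
```

Unlike ReLU, swish is differentiable everywhere and passes small negative values through, which is often credited with smoother gradient flow in deep generators.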