In the landscape of generation tasks, recent studies have employed diverse models to address the challenges posed by different datasets, showcasing notable advancements, as summarized in Table 2. AraBERT (Target/BERT), as examined in [31], demonstrated its effectiveness in the generation task when applied to the Saaq al-Bambuu dataset, achieving a precision (P) of 0.848, a recall (R) of 0.823, and an F1-score of 0.879. The study in [33] introduced AraT5, utilizing the XSum and OrangeSum datasets, and reported significant improvements, with R1 and R2 values of 7.5 and 18.30, respectively, while the BLEU value was not reported. Furthermore, the application of mT5 to the APGC dataset, as discussed in [34], resulted in a precision of 71.6%, an F1-score of 0.820, and a BLEU score of 97.5.
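As a point of reference for the precision, recall, and F1 figures reported above, the F1-score is the harmonic mean of precision and recall; the sketch below assumes the standard definitions of these metrics, which the cited studies do not restate:

\[
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2\,P\,R}{P + R},
\]

where \(TP\), \(FP\), and \(FN\) denote true positives, false positives, and false negatives, respectively.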