Hydrogen, a clean and sustainable energy source, is essential for meeting rising global energy demand and lowering carbon emissions. Hydrogen production is significant because it has the potential to transform the energy industry by providing a sustainable alternative to conventional fossil fuels. In recent years, deep learning has become a potent tool, exhibiting outstanding performance and dependability across a variety of domains, including the prediction of hydrogen generation, and it has shown promise in optimizing hydrogen production methods to increase their efficiency and reduce costs. A growing concern, however, is the susceptibility of deep learning models to adversarial attacks, which can reduce the accuracy and dependability of their predictions. Adversarial attacks involve deliberately altering input data to mislead machine learning models into producing incorrect results. Such attacks may have far-reaching effects on hydrogen production prediction, compromising the efficiency, economic feasibility, and safety of these processes. To address these concerns, we conducted an extensive investigation into the susceptibility of deep learning models used for hydrogen production prediction to adversarial attacks, using a co-gasification of biomass and plastics dataset. In this dataset, the dependent variable was the quantity of hydrogen generated, and the independent variables included the gasification temperature, the particle sizes of high-density polyethylene (HDPE) and rubber seed shell (RSS), and the quantity of plastic in the final product. The implemented adversarial attacks include the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) attack, the fast gradient sign method (FGSM), the basic iterative method (BIM), and projected gradient descent (PGD). This study employed four machine learning regression models and a novel deep learning model built with the Keras API to analyze the effect of the adversarial attacks at several perturbation magnitudes: 0.1, 0.2, 0.4, 0.6, and 0.8. The results show that the FGSM and PGD attacks significantly degrade the employed models' predictions, while the L-BFGS and basic iterative method yielded results that will be addressed in our future work. Our research highlights the potential risks of relying on these models for decision-making in hydrogen production processes while also revealing the vulnerabilities of deep learning models in this crucial domain. We also highlight the importance of developing defense mechanisms and security protocols to protect the integrity of deep learning-based predictions in this vital sector.
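For illustration, the sketch below shows how an FGSM-style perturbation can be applied to a Keras regression model at the perturbation magnitudes reported above (0.1, 0.2, 0.4, 0.6, 0.8). It is a minimal example under assumed conditions: the synthetic arrays X and y, the network architecture, and the fgsm_perturb helper are placeholders, not the exact models or dataset used in this study.

```python
# Minimal FGSM sketch for a regression model (assumptions: synthetic data,
# placeholder architecture; not the authors' exact setup).
import numpy as np
import tensorflow as tf

# Hypothetical data: 4 input features (e.g., temperature, HDPE particle size,
# RSS particle size, plastic content) and hydrogen yield as the target.
X = np.random.rand(100, 4).astype("float32")
y = np.random.rand(100, 1).astype("float32")

# A simple fully connected regression network (placeholder architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

def fgsm_perturb(model, x, y_true, epsilon):
    """FGSM for regression: x_adv = x + epsilon * sign(d(loss)/dx)."""
    x_tensor = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x_tensor)
        y_pred = model(x_tensor, training=False)
        loss = tf.keras.losses.MeanSquaredError()(y_true, y_pred)
    grad = tape.gradient(loss, x_tensor)
    return (x_tensor + epsilon * tf.sign(grad)).numpy()

# Measure how prediction error grows with the perturbation magnitude.
for eps in [0.1, 0.2, 0.4, 0.6, 0.8]:
    X_adv = fgsm_perturb(model, X, y, eps)
    mse = np.mean((model.predict(X_adv, verbose=0) - y) ** 2)
    print(f"epsilon={eps}: adversarial MSE={mse:.4f}")
```

Iterative attacks such as BIM and PGD follow the same gradient-sign idea but apply it in small repeated steps, projecting the perturbed input back into an epsilon-ball around the original sample after each step.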