Deep learning has achieved remarkable success in applications such as image classification, natural language processing, and speech recognition. However, training deep neural networks is challenging due to their complex architectures and the large number of parameters involved. Genetic algorithms have been proposed as an alternative optimization technique for deep learning, offering an efficient way to find a set of network parameters that minimizes the objective function. In this paper, we propose a novel approach that integrates genetic algorithms with deep learning, specifically LSTM models, to enhance performance. Our method uses a genetic algorithm to optimize crucial hyper-parameters, including the learning rate, batch size, number of neurons per layer, and number of layers. Additionally, we conduct a comprehensive analysis of how the genetic algorithm's own parameters influence the optimization process and show their significant impact on improving LSTM model performance. Overall, the presented method provides a powerful mechanism for improving the performance of deep neural networks, and we therefore believe it has significant potential for future applications in artificial intelligence.
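As context, the kind of GA-driven hyper-parameter search described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the search space values, GA settings (population size, generations, mutation rate), and the stand-in fitness function are all assumptions; in practice the fitness of a candidate would come from training an LSTM and measuring validation performance.

```python
import random

# Hypothetical discrete search space over the four hyper-parameters
# named in the abstract (values are illustrative assumptions).
SEARCH_SPACE = {
    "learning_rate": [1e-4, 5e-4, 1e-3, 5e-3, 1e-2],
    "batch_size": [16, 32, 64, 128],
    "neurons_per_layer": [32, 64, 128, 256],
    "num_layers": [1, 2, 3, 4],
}

def random_individual():
    """Sample one candidate hyper-parameter configuration."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(parent_a, parent_b):
    """Uniform crossover: each gene comes from either parent."""
    return {k: random.choice([parent_a[k], parent_b[k]]) for k in SEARCH_SPACE}

def mutate(individual, rate=0.2):
    """Resample each gene with probability `rate`."""
    return {
        k: random.choice(SEARCH_SPACE[k]) if random.random() < rate else v
        for k, v in individual.items()
    }

def evolve(fitness, pop_size=10, generations=5, elite=2):
    """Maximize `fitness` with elitist selection, crossover, and mutation."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - elite)
        ]
        # Elitism: carry the best `elite` individuals forward unchanged.
        population = population[:elite] + children
    return max(population, key=fitness)

# Toy fitness standing in for "train an LSTM, return validation score";
# it simply rewards configurations near an arbitrary target.
def toy_fitness(ind):
    return -abs(ind["learning_rate"] - 1e-3) - abs(ind["num_layers"] - 2)

best = evolve(toy_fitness)
```

The returned `best` dictionary is the fittest hyper-parameter configuration found; swapping `toy_fitness` for a function that trains and evaluates an LSTM yields the search scheme the abstract describes.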