Quantum Machine Learning (QML) holds the promise of significantly changing how Artificial Intelligence (AI) functions. A notable development in this field is the Quantum Long Short-Term Memory (QLSTM) network, which has been observed to learn faster than the classical Long Short-Term Memory (LSTM) network. QLSTM builds on the classical LSTM architecture but incorporates principles of quantum computation, exploiting superposition and entanglement to process information in parallel. By operating on quantum bits (qubits), QLSTM may offer advantages for specific machine learning tasks. Despite this potential, QLSTM remains a relatively unexplored area within QML, particularly with respect to its dependence on the number of qubits and the effect of different optimizers on its performance.

This study aims to provide insights that enhance QLSTM and explore its potential applications in AI, focusing on two aspects. First, it examines how the Adam and Adadelta optimization methods improve QLSTM models and how each contributes to refining the quantum model's performance. Second, it investigates the consequences of varying the number of qubits in the QLSTM model, since understanding how qubit count influences performance is essential for optimizing the network's efficiency. By addressing these aspects, the study seeks to contribute to advancing QLSTM and to offer useful knowledge for its potential applications in AI.

The findings suggest promising directions for future research on QLSTM models, including optimization strategies, regularization techniques, and quantum data augmentation. The study also emphasizes the importance of leveraging ensemble learning and of validating applicability in real-world quantum computing scenarios. These efforts are crucial for advancing understanding and maximizing potential in the continually evolving landscape of quantum computing applications.
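As a hedged illustration of the two aspects studied (not the paper's actual implementation): QLSTM variants typically replace the linear layers inside the LSTM gates with small variational quantum circuits whose parameter count scales with the number of qubits, and those circuit parameters are trained with classical optimizers such as Adam or Adadelta. The sketch below simulates one such variational circuit with NumPy state vectors (RY rotations followed by a CNOT chain; the circuit layout, hyperparameters, and function names are illustrative assumptions) and trains its parameters with hand-written Adam and Adadelta updates, using the parameter-shift rule for gradients.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    state = state.reshape([2] * n)
    state = np.moveaxis(state, qubit, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

def apply_cnot(state, control, target, n):
    """Apply CNOT: flip the target bit on the control=1 subspace."""
    state = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[control] = 1
    sub = state[tuple(idx)].copy()
    t = target if target < control else target - 1  # axis shift after slicing
    state[tuple(idx)] = np.flip(sub, axis=t)
    return state.reshape(-1)

def circuit_expval(params, n):
    """One RY per qubit, a CNOT chain, then <Z> on qubit 0.
    The parameter count grows with the qubit count n."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for q in range(n):
        state = apply_single(state, ry(params[q]), q, n)
    for q in range(n - 1):
        state = apply_cnot(state, q, q + 1, n)
    probs = np.abs(state.reshape([2] * n)) ** 2
    return probs[0].sum() - probs[1].sum()

def grad_expval(params, n):
    """Parameter-shift rule: df/dθ_i = [f(θ+π/2 e_i) - f(θ-π/2 e_i)] / 2."""
    g = np.zeros_like(params)
    for i in range(len(params)):
        plus, minus = params.copy(), params.copy()
        plus[i] += np.pi / 2
        minus[i] -= np.pi / 2
        g[i] = 0.5 * (circuit_expval(plus, n) - circuit_expval(minus, n))
    return g

def train_adam(n, steps=400, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Drive <Z> toward -1 by minimizing (f + 1)^2 with Adam updates."""
    theta = np.full(n, 0.1)
    m, v = np.zeros(n), np.zeros(n)
    for t in range(1, steps + 1):
        f = circuit_expval(theta, n)
        g = 2.0 * (f + 1.0) * grad_expval(theta, n)   # chain rule on the loss
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g ** 2
        mh, vh = m / (1 - b1 ** t), v / (1 - b2 ** t)  # bias correction
        theta -= lr * mh / (np.sqrt(vh) + eps)
    return theta, (circuit_expval(theta, n) + 1.0) ** 2

def train_adadelta(n, steps=400, rho=0.95, eps=1e-6):
    """Same objective, Adadelta updates (no explicit learning rate)."""
    theta = np.full(n, 0.1)
    eg, ed = np.zeros(n), np.zeros(n)  # running averages of g^2 and Δθ^2
    for _ in range(steps):
        f = circuit_expval(theta, n)
        g = 2.0 * (f + 1.0) * grad_expval(theta, n)
        eg = rho * eg + (1 - rho) * g ** 2
        delta = -np.sqrt(ed + eps) / np.sqrt(eg + eps) * g
        theta += delta
        ed = rho * ed + (1 - rho) * delta ** 2
    return theta, (circuit_expval(theta, n) + 1.0) ** 2
```

Running `train_adam(3)` and `train_adadelta(3)` on the same 3-qubit circuit shows the qualitative contrast studied here: Adam's bias-corrected, rate-scaled steps converge quickly, while Adadelta's ratio-of-running-averages steps start very small and only gradually accelerate.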