This work examines the use of deep Reinforcement Learning (RL) for position control of a mass-spring system, offering a perspective that goes beyond conventional control techniques. Mass-spring systems are fundamental models in control theory and appear widely across engineering applications. The novel aspect of this approach is a systematic examination of how different optimizer algorithms affect the RL methodology and, in turn, the resulting control strategies. The study applies the Deep Deterministic Policy Gradient (DDPG) algorithm for continuous action spaces, in which the actor network selects control actions and the critic network estimates their value. The RL agent is trained to follow a reference trajectory, with the system modeled in a Simulink environment. The training process is evaluated using force-time graphs, reward graphs, and Episode Manager charts, providing insight into the agent's learning behavior and performance optimization. Furthermore, the effect of different optimizer combinations on the agent's control performance is examined. The results reveal significant variations in training time, underscoring the importance of optimizer selection in the learning process. This novel application of reinforcement learning to mass-spring system control thus yields a clearer understanding of the relationship between optimizer choice and control performance. The findings point toward more capable methods for controlling complex systems and contribute to the growing body of work at the interface of control theory and deep learning.
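The abstract describes a DDPG agent with actor and critic networks, trained against a Simulink mass-spring model, where the optimizer assigned to each network is the variable under study. As a rough illustration of that structure only, and not the authors' MATLAB/Simulink implementation, the sketch below uses Python and PyTorch: a toy mass-spring-damper step function, an actor/critic pair, and one DDPG-style update in which the optimizer for each network is a swappable choice. The dynamics constants, network sizes, learning rates, and the specific optimizers shown are assumptions for illustration; target networks, exploration noise, and a replay buffer are omitted for brevity.

```python
# Illustrative sketch (assumed setup, not the paper's Simulink/MATLAB code):
# a mass-spring-damper environment and a DDPG-style actor/critic in PyTorch,
# with the optimizer for each network left as a configurable choice.
import torch
import torch.nn as nn

# Mass-spring-damper dynamics: m*x'' + c*x' + k*x = F, integrated with Euler.
M, K, C, DT = 1.0, 5.0, 0.5, 0.01  # illustrative constants

def env_step(state, force):
    x, v = state
    a = (force - K * x - C * v) / M
    return (x + DT * v, v + DT * a)

class Actor(nn.Module):
    """Maps the observation (position error, velocity) to a bounded force."""
    def __init__(self, max_force=10.0):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Tanh())
        self.max_force = max_force

    def forward(self, obs):
        return self.max_force * self.net(obs)

class Critic(nn.Module):
    """Estimates Q(observation, action) for the DDPG update."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

actor, critic = Actor(), Critic()

# The optimizer pairing is the quantity the study varies; the names and
# learning rates here are stand-ins, not necessarily what the authors compared.
OPTIMIZERS = {"adam": torch.optim.Adam,
              "rmsprop": torch.optim.RMSprop,
              "sgd": torch.optim.SGD}
actor_opt = OPTIMIZERS["adam"](actor.parameters(), lr=1e-4)
critic_opt = OPTIMIZERS["rmsprop"](critic.parameters(), lr=1e-3)

def ddpg_update(batch, gamma=0.99):
    """One DDPG-style update on a batch of (obs, act, reward, next_obs).
    Target networks are omitted here to keep the sketch short."""
    obs, act, rew, next_obs = batch
    with torch.no_grad():
        target_q = rew + gamma * critic(next_obs, actor(next_obs))

    # Critic regression toward the bootstrapped target.
    critic_loss = nn.functional.mse_loss(critic(obs, act), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor ascends the critic's value estimate of its own actions.
    actor_loss = -critic(obs, actor(obs)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

Swapping the entries of `OPTIMIZERS` used for `actor_opt` and `critic_opt` reproduces, in spirit, the optimizer-pairing comparison the abstract refers to: each pairing can be trained on the same reference-tracking task and compared by training time and reward curves.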