Abstract—This paper focuses on enhancing the object detection and decision-making capabilities of Autonomous Emergency Medical Response Systems (AEMRS). Using advanced sensors, Convolutional Neural Networks (CNNs), and reinforcement learning, we propose a model that processes environmental data to identify obstacles and make optimal navigation decisions. The integration of these technologies aims to minimize response time and improve the efficiency of emergency medical services. The perception system fuses radar, LiDAR, and camera inputs to build a comprehensive representation of the environment. These data are processed through a series of CNN layers to detect and classify objects such as vehicles, pedestrians, and road signs. On the decision-making front, a Q-learning algorithm enables the ambulance to learn from its interactions with the environment, continuously improving its route planning and collision-avoidance strategies. By combining these advanced AI techniques, the proposed AEMRS can significantly enhance the speed and reliability of emergency responses, ultimately saving more lives. This paper presents the design, implementation, and simulation results of the proposed system, demonstrating its potential to revolutionize emergency medical services.

Index Terms—Object Detection, Convolutional Neural Networks (CNN), Reinforcement Learning, Q-Learning, AI-Driven Ambulance, Autonomous Systems, Emergency Medical Services, Real-Time Navigation.
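As a minimal, illustrative sketch of the tabular Q-learning update underlying the decision-making module described above, the snippet below applies the standard Bellman update Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)) on a small grid-road abstraction; the state and action encodings, reward values, and hyperparameters shown are assumptions made for illustration, not the paper's actual implementation.

```python
import numpy as np

# Illustrative tabular Q-learning for route planning on a grid-road abstraction.
# The state/action encodings, rewards, and hyperparameters below are assumed
# for this sketch only, not taken from the proposed AEMRS implementation.
N_STATES, N_ACTIONS = 100, 4          # e.g. a 10x10 road grid; moves: N/S/E/W
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))

def choose_action(state, rng):
    """Epsilon-greedy action selection over the Q-table."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))     # explore
    return int(np.argmax(Q[state]))             # exploit

def q_update(state, action, reward, next_state):
    """One Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

# Single learning step with placeholder transition values.
rng = np.random.default_rng(0)
s = 0
a = choose_action(s, rng)
q_update(s, a, reward=-1.0, next_state=1)       # -1 per step favors shorter routes
```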