In the evolving automobile industry, Adaptive Cruise Control (ACC) is key to aiding autonomous traffic navigation. An ideal ACC system can decelerate to low speeds in stop-and-go traffic, maintain a safe following distance, minimize rear-end collision risk, and reduce the driver's need to continually adjust the vehicle's speed to match traffic flow. In this paper, we present a Deep Reinforcement Learning-based adaptive cruise control (DRL-ACC) system that learns safe, flexible, and responsive car-following policies. Instead of using discrete incremental and decremental acceleration values or a continuous action space, we construct a discrete high-level action space: accelerate, decelerate, or hold the current speed. We also provide a comprehensive, easy-to-interpret multi-objective reward function that reflects safe, responsive, and rational traffic behavior. Trained on a single steady-state flow car-following scenario, this strategy promotes steadiness and responsiveness and generalizes better to diverse car-following scenarios. Results are also compared against the conventional Intelligent Driver Model (IDM). We further explore the model's potential to avoid rear-end collisions and to support future integration of lane-change maneuvers, which would increase its effectiveness in emergency situations.
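To make the two design choices named above concrete, the following is a minimal illustrative sketch (not the paper's exact formulation) of a three-action high-level action space and a multi-objective reward combining a safety term and a responsiveness term; all constants (step size, safe gap, penalty weights) are assumptions chosen for demonstration only.

```python
from enum import Enum

class Action(Enum):
    """Discrete high-level ACC actions, as opposed to raw acceleration values."""
    ACCELERATE = 0
    HOLD = 1
    DECELERATE = 2

SPEED_STEP = 1.0  # m/s change per decision step (assumed value)

def apply_action(ego_speed: float, action: Action) -> float:
    """Map a high-level action onto the ego vehicle's next speed."""
    if action is Action.ACCELERATE:
        return ego_speed + SPEED_STEP
    if action is Action.DECELERATE:
        return max(0.0, ego_speed - SPEED_STEP)  # speed cannot go negative
    return ego_speed  # HOLD keeps the current speed

def reward(gap: float, ego_speed: float, lead_speed: float,
           safe_gap: float = 10.0) -> float:
    """Multi-objective reward: safety plus responsiveness.

    - Safety: a flat penalty when the following gap drops below safe_gap,
      discouraging rear-end collision risk.
    - Responsiveness: a penalty proportional to the speed mismatch with the
      lead vehicle, so the agent tracks traffic flow instead of oscillating.
    """
    safety = -5.0 if gap < safe_gap else 0.0
    responsiveness = -abs(ego_speed - lead_speed) / 10.0
    return safety + responsiveness
```

A reward shaped this way stays easy to interpret because each term maps to one behavioral objective, which is the property the abstract emphasizes.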