The widespread adoption of electric vehicles (EVs) has introduced new challenges for stakeholders ranging from grid operators to EV owners. A central challenge is developing an effective and economical strategy for managing EV charging while accommodating the diverse objectives of all parties involved. In this study, we propose a context-aware EV smart charging system that leverages deep reinforcement learning (DRL) to address the distinct requirements and goals of each participant. Our DRL-based approach adapts dynamically to changing contextual factors such as time of day, location, and weather to optimize charging decisions in real time. By balancing charging cost, grid load reduction, fleet operator preferences, and charging station energy efficiency, the system offers EV owners a seamless and cost-efficient charging experience. Through simulations, we evaluate the efficiency of the proposed Deep Q-Network (DQN) system against three other DRL methods: Proximal Policy Optimization (PPO), Asynchronous Advantage Actor-Critic (A3C), and Deep Deterministic Policy Gradient (DDPG). Notably, the proposed DQN approach demonstrated superior computational performance compared to the others. Our results show that the proposed system achieves approximately an 18% improvement in energy efficiency over traditional methods, about a 12% increase in cost-effectiveness for EV owners, a 20% reduction in grid strain, and a 10% reduction in CO2 emissions through the use of renewable energy sources. The system's success lies in its ability to perform sequential decision-making, extract intricate data patterns, and adapt to dynamic contexts. Consequently, the proposed system not only meets the efficiency and optimization requirements of fleet operators and charging station maintainers but also represents a promising stride toward sustainable and balanced EV charging management.
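To make the setup described above concrete, the following is a minimal, self-contained sketch of Q-learning with a small neural value function over contextual charging state. All details here are illustrative assumptions, not the paper's implementation: the state features (normalized hour, grid load, electricity price), the two actions (idle vs. charge), the toy reward favoring charging when price and load are low, and the tiny two-layer network trained by hand-written gradient steps in place of a full DQN with replay buffer and target network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical context features: [hour/24, grid load, electricity price], all in [0, 1]
STATE_DIM, N_ACTIONS, HIDDEN = 3, 2, 16  # actions: 0 = idle, 1 = charge

# Tiny two-layer Q-network (raw weight matrices; no framework needed for a sketch)
W1 = rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))

def q_values(s):
    """Return Q(s, .) for both actions, plus the hidden activations."""
    h = np.maximum(0.0, s @ W1)  # ReLU hidden layer
    return h @ W2, h

def act(s, eps=0.1):
    """Epsilon-greedy action selection over the current Q estimates."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(s)[0]))

def td_update(s, a, r, s_next, gamma=0.99, lr=0.01):
    """One gradient step on the squared TD error for the taken action."""
    global W1, W2
    q, h = q_values(s)
    target = r + gamma * np.max(q_values(s_next)[0])
    err = q[a] - target  # TD error
    # Backpropagate through both layers (only column a of W2 is involved)
    gW2 = np.outer(h, np.eye(N_ACTIONS)[a]) * err
    gW1 = np.outer(s, (W2[:, a] * (h > 0)) * err)
    W2 -= lr * gW2
    W1 -= lr * gW1
    return err

def reward(s, a):
    """Toy reward: charging pays off when price and grid load are low."""
    hour, load, price = s
    return (1.0 - price - load) if a == 1 else 0.0

# Short simulated rollout over randomly drifting context
s = np.array([0.1, 0.2, 0.3])
for _ in range(200):
    a = act(s)
    s_next = np.clip(s + rng.normal(0.0, 0.05, STATE_DIM), 0.0, 1.0)
    td_update(s, a, reward(s, a), s_next)
    s = s_next
```

A production version would add experience replay, a separate target network, and richer context features (weather, fleet schedules), which is what distinguishes DQN proper from this single-step sketch.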