Efficient path-planning optimization strategies are required to maximize flight time while consuming the least energy. This research offers a novel approach to energy-efficient path planning for Unmanned Aerial Vehicles (UAVs) that combines a hybrid evolutionary algorithm with Q-learning while accounting for the UAV's velocity and distance from obstacles. To overcome the constraints of traditional optimization approaches, the hybrid methodology combines genetic algorithms and Q-learning. The proposed approach optimizes path-planning decisions based on real-time information by considering the UAV's velocity and distance from obstacles. A Genetic Algorithm (GA) creates a diverse collection of candidate paths, while Q-learning uses reinforcement learning to make informed decisions based on the UAV's current velocity and proximity to static obstacles. This integration allows the UAV to modify its path dynamically according to its energy requirements and environmental constraints. The main goal is to develop a UAV path-planning scheme capable of handling obstacle-filled environments to improve energy efficiency and collision avoidance during flight missions. Our experimental results show that the hybrid technique outperforms the classical GA method in terms of energy efficiency, significantly reducing energy consumption while maintaining an acceptable collision rate and near-optimal path cost to the desired locations. The analysis shows that the hybrid GA/QL algorithm improves performance by more than 57.14% compared to the classical GA.
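
The abstract does not include an implementation; as an illustration only, the following Python sketch shows one way a GA-evolved population of candidate move sequences could be combined with a Q-learning stage whose state is a discretized (velocity, distance-to-obstacle) pair and whose reward penalizes energy, collisions, and distance to the goal. The grid size, obstacle set, reward weights, state buckets, and all function names are assumptions made for this example, not the authors' actual settings.

```python
import random
import numpy as np

GRID = 20                        # assumed square grid size
START, GOAL = (0, 0), (19, 19)   # assumed start/goal cells
OBSTACLES = {(5, 5), (5, 6), (10, 12), (14, 8)}  # assumed static obstacles
MOVES = [(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1)]

def random_path(length=40):
    """GA individual: a sequence of (dx, dy) moves."""
    return [random.choice(MOVES) for _ in range(length)]

def simulate(path):
    """Follow a move sequence; return (energy, collisions, distance_to_goal)."""
    x, y = START
    energy, collisions = 0.0, 0
    for dx, dy in path:
        x = min(max(x + dx, 0), GRID - 1)
        y = min(max(y + dy, 0), GRID - 1)
        energy += (dx * dx + dy * dy) ** 0.5     # crude per-move energy cost
        if (x, y) in OBSTACLES:
            collisions += 1
    dist = abs(GOAL[0] - x) + abs(GOAL[1] - y)
    return energy, collisions, dist

def fitness(path):
    """Lower is better: weighted energy, collision, and goal-distance terms."""
    energy, collisions, dist = simulate(path)
    return energy + 50 * collisions + 10 * dist

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(path, rate=0.05):
    return [random.choice(MOVES) if random.random() < rate else m for m in path]

def ga_candidates(pop_size=30, generations=50, keep=5):
    """GA stage: evolve a diverse set of low-cost candidate paths."""
    pop = [random_path() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return sorted(pop, key=fitness)[:keep]   # best candidates handed to Q-learning

# Q-learning stage: state = (velocity bucket, obstacle-distance bucket),
# action = index of the candidate path to commit to next.
Q = np.zeros((3, 3, 5))   # 3 velocity buckets x 3 distance buckets x 5 candidates

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    v, d = state
    nv, nd = next_state
    Q[v, d, action] += alpha * (reward + gamma * Q[nv, nd].max() - Q[v, d, action])

if __name__ == "__main__":
    candidates = ga_candidates()
    state = (1, 2)   # assumed initial (velocity, obstacle-distance) buckets
    for episode in range(200):
        action = (random.randrange(5) if random.random() < 0.1   # epsilon-greedy
                  else int(Q[state].argmax()))
        energy, collisions, dist = simulate(candidates[action])
        reward = -(energy + 50 * collisions + 10 * dist)
        next_state = (random.randrange(3), random.randrange(3))  # placeholder transition model
        q_update(state, action, reward, next_state)
        state = next_state
    print("Best candidate index per state bucket:\n", Q.argmax(axis=2))
```

In this sketch the GA supplies the discrete action set (candidate paths) and Q-learning learns which candidate to prefer in each velocity/obstacle-distance state; the paper's actual state representation, reward shaping, and transition handling may differ.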