Advances in Unmanned Air Vehicle (UAV) technology have paved the way for numerous configurations and applications in communication systems. However, the dynamics of a UAV play an important role in determining its effective use. In this article, while accounting for UAV dynamics, we evaluate the performance of a UAV equipped with a Mobile-Edge Computing (MEC) server that provides services to End-user Devices (EuDs). Due to their limited energy resources, the EuDs offload a portion of their computational tasks to the nearby MEC-enabled UAV. To this end, we jointly optimize the computational cost, 3D UAV placement, and resource allocation subject to network, communication, and environmental constraints. A Deep Reinforcement Learning (DRL) technique based on a continuous action space, namely the Deep Deterministic Policy Gradient (DDPG), is utilized. By exploiting DDPG, we propose an optimization strategy to obtain an optimal offloading policy in the presence of UAV dynamics, which has not been considered in earlier studies. The proposed strategy comprises three cases: training with an ideal scenario (Case I), training with error dynamics (Case II), and training with extreme values (Case III). We compare the performance of these cases in terms of cost percentage and find that Case II (training with error dynamics) achieves the minimum cost of 37.75%, whereas Cases I and III settle at 67.25% and 67.50%, respectively. Extensive numerical simulations show that the DDPG-based algorithm, combined with the error-dynamics protocol, converges to a near-optimal solution. To validate the efficacy of the proposed algorithm, a comparison with the state-of-the-art Deep Q-Network (DQN) is carried out, which shows that our algorithm offers significant improvements.
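As a rough illustration of the continuous-action DDPG approach named above (not the paper's actual implementation), the following minimal PyTorch sketch shows an actor-critic pair whose action could encode the UAV's 3D position and an offloading ratio. The state and action dimensions, network sizes, learning rates, and environment interface are all illustrative assumptions.

```python
# Minimal DDPG sketch (PyTorch). The state is assumed to contain EuD task and
# channel information; the continuous action is assumed to encode [x, y, z,
# offloading ratio]. All dimensions and hyperparameters are hypothetical.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 12, 4  # assumed dimensions, not from the paper

class Actor(nn.Module):
    """Deterministic policy: state -> continuous action in [-1, 1]^ACTION_DIM."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Tanh())
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Q-function: (state, action) -> scalar value estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
target_actor, target_critic = Actor(), Critic()
target_actor.load_state_dict(actor.state_dict())
target_critic.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma, tau = 0.99, 0.005

def ddpg_update(batch):
    """One DDPG step on a batch of (state, action, reward, next_state) tensors."""
    s, a, r, s_next = batch
    with torch.no_grad():
        q_target = r + gamma * target_critic(s_next, target_actor(s_next))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Reward would be the negative computational cost in this setting.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak averaging of the target networks.
    for net, target in ((actor, target_actor), (critic, target_critic)):
        for p, tp in zip(net.parameters(), target.parameters()):
            tp.data.mul_(1 - tau).add_(tau * p.data)
```

In such a setup, the three training cases would differ only in how the environment perturbs the UAV state during training (ideal, error dynamics, or extreme values), while the DDPG update itself stays the same.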