Wide-area damping control of inter-area oscillations (IAOs) is critical to modern power systems. Recent breakthroughs in deep learning and the broad deployment of phasor measurement units (PMUs) promote the development of data-driven IAO damping controllers. In this paper, the damping control of IAOs is modeled as a Markov Decision Process (MDP) and solved by the proposed deep reinforcement learning (DRL) approach based on the Deep Deterministic Policy Gradient (DDPG) algorithm. The proposed approach optimizes the eigenvalue distribution of the system, which inherently determines the IAO modes. The eigenvalues are estimated by the data-driven method of dynamic mode decomposition. For a given power system, only a subset of generators, selected by participation factors, needs to be controlled, which reduces the control and computational burden. A Switching Control Strategy (SCS) is introduced to improve the transient response of IAOs. Numerical simulations on the IEEE-39 New England power grid model validate the effectiveness and superior performance of the proposed approach, as well as its robustness against communication delays. In addition, we demonstrate the transferability of the DRL model trained on the linearized power grid model, which provides effective IAO damping control in the non-linear power grid environment.
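
As a point of reference for the eigenvalue estimation step mentioned above, the sketch below shows how exact dynamic mode decomposition can recover the dominant eigenvalues of approximately linear dynamics from time-synchronized measurement snapshots. It is a minimal illustration of the general technique under stated assumptions, not the paper's implementation; the function name, rank-truncation parameter, and snapshot layout (rows are measured states, columns are uniformly sampled time instants) are assumptions for the example.

```python
import numpy as np

def dmd_eigenvalues(X, dt, rank=None):
    """Estimate continuous-time eigenvalues of the underlying (approximately
    linear) dynamics from a snapshot matrix X of shape (n_states, n_samples),
    sampled with step dt, using exact DMD."""
    # Split snapshots into consecutive pairs: X2 ~ A @ X1
    X1, X2 = X[:, :-1], X[:, 1:]

    # Reduced-rank SVD of the first snapshot matrix
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]

    # Projection of the one-step propagator A onto the POD subspace
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)

    # Discrete-time eigenvalues, mapped to continuous time
    mu = np.linalg.eigvals(A_tilde).astype(complex)
    return np.log(mu) / dt
```

For each recovered eigenvalue lambda, the modal frequency is |Im(lambda)|/(2*pi) and the damping ratio is -Re(lambda)/|lambda|, which is how the eigenvalue distribution relates to the inter-area oscillation modes that the controller is designed to shape.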