This paper presents a novel human-centered collaborative driving scheme based on a model-free reinforcement learning (RL) approach. Human-machine cooperation is achieved at both the decision-making and steering control levels to improve driving safety while preserving as much freedom for the human driver as possible. A Markov decision process is first formulated from the collaborative driving problem, and an RL agent is then developed and trained to cooperatively control the vehicle steering under the guidance of a heuristic reward function. The twin delayed deep deterministic policy gradient (TD3) algorithm is employed to obtain the optimal control policy. In addition, two extended algorithms with distinct agent action definitions and training patterns are devised. The effectiveness of the RL-based copilot system is validated in an obstacle avoidance scenario through simulation experiments. The driving performance and training efficiency of the different RL agents are measured and compared to demonstrate the superiority of the proposed method.
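Since the abstract names TD3 as the learning algorithm, the sketch below illustrates its three standard mechanisms in one update step: target policy smoothing, clipped double-Q learning, and delayed policy updates. This is a generic PyTorch illustration under assumed settings (256-unit networks, a learning rate of 3e-4, and a continuous steering action clipped to [-1, 1]); it is not the authors' implementation, whose state definition, reward function, and hyperparameters are not given here.

```python
# Minimal TD3 update sketch (assumed settings, not the paper's implementation).
import copy
import torch
import torch.nn as nn

def mlp(sizes):
    # Small fully connected network with ReLU hidden activations.
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class TD3:
    def __init__(self, s_dim, a_dim, gamma=0.99, tau=0.005,
                 policy_noise=0.2, noise_clip=0.5, policy_delay=2):
        # Actor outputs a bounded continuous action (here: steering in [-1, 1]).
        self.actor = nn.Sequential(mlp([s_dim, 256, 256, a_dim]), nn.Tanh())
        # Twin critics map (state, action) to a scalar Q-value.
        self.q1 = mlp([s_dim + a_dim, 256, 256, 1])
        self.q2 = mlp([s_dim + a_dim, 256, 256, 1])
        self.actor_t = copy.deepcopy(self.actor)
        self.q1_t, self.q2_t = copy.deepcopy(self.q1), copy.deepcopy(self.q2)
        self.pi_opt = torch.optim.Adam(self.actor.parameters(), lr=3e-4)
        self.q_opt = torch.optim.Adam(
            list(self.q1.parameters()) + list(self.q2.parameters()), lr=3e-4)
        self.gamma, self.tau = gamma, tau
        self.policy_noise, self.noise_clip = policy_noise, noise_clip
        self.policy_delay, self.step = policy_delay, 0

    def update(self, s, a, r, s2, done):
        # Target policy smoothing: perturb the target action with clipped noise.
        with torch.no_grad():
            noise = (torch.randn_like(a) * self.policy_noise
                     ).clamp(-self.noise_clip, self.noise_clip)
            a2 = (self.actor_t(s2) + noise).clamp(-1.0, 1.0)
            # Clipped double-Q: use the minimum of the two target critics.
            q_t = torch.min(self.q1_t(torch.cat([s2, a2], 1)),
                            self.q2_t(torch.cat([s2, a2], 1)))
            y = r + self.gamma * (1 - done) * q_t
        sa = torch.cat([s, a], 1)
        q_loss = ((self.q1(sa) - y) ** 2).mean() + ((self.q2(sa) - y) ** 2).mean()
        self.q_opt.zero_grad(); q_loss.backward(); self.q_opt.step()

        # Delayed policy updates: refresh actor and targets every few critic steps.
        self.step += 1
        if self.step % self.policy_delay == 0:
            pi_loss = -self.q1(torch.cat([s, self.actor(s)], 1)).mean()
            self.pi_opt.zero_grad(); pi_loss.backward(); self.pi_opt.step()
            # Polyak averaging of target networks.
            for net, tgt in ((self.actor, self.actor_t),
                             (self.q1, self.q1_t), (self.q2, self.q2_t)):
                for p, pt in zip(net.parameters(), tgt.parameters()):
                    pt.data.mul_(1 - self.tau).add_(self.tau * p.data)
```

In a copilot setting such as the one summarized above, the state fed to `update` would encode both the vehicle state and the human driver's input, and the learned action would be blended with or correct the human steering command; those design choices belong to the paper itself and are not reproduced here.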