Transfer learning has been widely used to mitigate the slow convergence of reinforcement learning (RL) by reusing knowledge obtained from related but distinct tasks. In this paper, we propose a framework that transfers knowledge learned directly from human demonstration trajectories of source tasks to shape the RL algorithm in the target task, thereby avoiding the time-consuming RL training process in the source tasks and broadening the transfer learning paradigm in RL domains. Rather than transferring the more common value function or policy, our framework adopts the state visit frequencies of successful demonstration trajectories as the transferred knowledge and performs the transfer via a shared agent space. Simulation experiments on obstacle avoidance problems show that the transferred knowledge noticeably accelerates learning in the target task. As a case study, these experiments demonstrate the potential of our framework for knowledge transfer in RL tasks.
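As a rough illustration of the idea summarized above (not the paper's actual implementation), the sketch below estimates state visit frequencies from successful demonstration trajectories and uses them as a shaping bonus inside tabular Q-learning on the target task. The `env` interface, the `map_to_agent_space` function standing in for the shared agent space, and the bonus weight `beta` are all illustrative assumptions.

```python
from collections import defaultdict
import random

def state_visit_frequencies(demo_trajectories, map_to_agent_space=lambda s: s):
    """Estimate normalized visit frequencies of states appearing in successful
    demonstration trajectories, mapped into a (hypothetical) shared agent space."""
    counts, total = defaultdict(int), 0
    for trajectory in demo_trajectories:
        for state in trajectory:
            counts[map_to_agent_space(state)] += 1
            total += 1
    return {s: c / total for s, c in counts.items()}

def shaped_q_learning(env, freqs, episodes=500, alpha=0.1, gamma=0.99,
                      epsilon=0.1, beta=1.0):
    """Tabular Q-learning in which the environment reward is augmented with a
    bonus proportional to the demonstration visit frequency of the next state:
    one plausible way to 'shape' the target-task learner with the transferred
    knowledge. The env interface (reset/step/actions) is assumed for illustration."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.actions(state)
            if random.random() < epsilon:           # epsilon-greedy exploration
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            shaped = reward + beta * freqs.get(next_state, 0.0)  # visit-frequency bonus
            best_next = 0.0 if done else max(Q[(next_state, a)]
                                             for a in env.actions(next_state))
            Q[(state, action)] += alpha * (shaped + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

Under these assumptions, states frequently visited in successful demonstrations receive a higher shaping bonus, biasing early exploration in the target task toward the demonstrated behavior.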