Transfer learning in Reinforcement Learning (RL) has been widely studied as a way to overcome training challenges in Deep-RL, such as exploration cost, data availability, and convergence time, by bootstrapping external knowledge to enhance the learning phase. While this mitigates the training issues of a novice agent, an effective transfer requires an expert agent with a good understanding of the task. As an alternative, in this paper we propose Expert-Free Online Transfer Learning (EF-OnTL), an algorithm that enables expert-free, real-time, dynamic transfer learning in multi-agent systems. No dedicated expert agent exists; instead, the transfer source agent and the knowledge to be transferred are selected dynamically at each transfer step based on the agents' performance and level of uncertainty. To improve uncertainty estimation, we also propose State Action Reward Next-State Random Network Distillation (sars-RND), an extension of RND that estimates uncertainty from the full RL agent-environment interaction tuple. We demonstrate the effectiveness of EF-OnTL against a no-transfer scenario and state-of-the-art advice-based baselines, with and without expert agents, in three benchmark tasks: Cart-Pole, a grid-based Multi-Team Predator-Prey (MT-PP), and Half Field Offense (HFO). Our results show that EF-OnTL achieves performance comparable to that of advice-based approaches, while requiring neither expert agents, external input, nor threshold tuning. EF-OnTL also outperforms the no-transfer baseline, with an improvement that grows with the complexity of the task.
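The abstract gives only the high-level idea behind sars-RND. As a minimal illustrative sketch, not the authors' implementation, it can be read as standard Random Network Distillation with the input extended from the state alone to the full (s, a, r, s') transition: a fixed random target network and a trained predictor both receive the concatenated tuple, and the predictor's error serves as the uncertainty estimate. The network sizes, the PyTorch framing, and all identifiers below are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class SarsRND(nn.Module):
    """Sketch of sars-RND: fixed random target network + trained predictor
    over (s, a, r, s') tuples; prediction error = per-transition uncertainty."""

    def __init__(self, state_dim: int, action_dim: int, embed_dim: int = 64):
        super().__init__()
        in_dim = 2 * state_dim + action_dim + 1  # concatenation of (s, a, r, s')

        def mlp() -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(in_dim, 128), nn.ReLU(),
                nn.Linear(128, embed_dim),
            )

        self.target = mlp()      # randomly initialised, never trained
        self.predictor = mlp()   # trained to imitate the target
        for p in self.target.parameters():
            p.requires_grad_(False)

    def uncertainty(self, s, a, r, s_next) -> torch.Tensor:
        x = torch.cat([s, a, r, s_next], dim=-1)
        with torch.no_grad():
            y = self.target(x)
        # High error -> rarely seen transition -> high estimated uncertainty.
        return ((self.predictor(x) - y) ** 2).mean(dim=-1)


# Illustrative usage on a dummy batch of 32 transitions (dimensions assumed).
rnd = SarsRND(state_dim=4, action_dim=2)
opt = torch.optim.Adam(rnd.predictor.parameters(), lr=1e-4)
s, a = torch.randn(32, 4), torch.randn(32, 2)
r, s_next = torch.randn(32, 1), torch.randn(32, 4)
loss = rnd.uncertainty(s, a, r, s_next).mean()  # doubles as the training loss
opt.zero_grad(); loss.backward(); opt.step()
```

Under this reading, the same prediction-error signal that RND uses as a novelty bonus over states would here rank whole transitions, which is what would let a transfer mechanism decide which experience a target agent is most uncertain about.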