The development of 5G networks and the transition toward 6G have given rise to multiple challenges in ensuring high-quality, reliable network services. A central response to these challenges is the emergence of intelligent defined networks (IDN), designed to provide highly efficient connectivity by merging artificial intelligence (AI) with networking concepts, so that intelligence is distributed over the entire network. To this end, it will be necessary to develop and implement machine learning (ML) algorithms that account for this distributed nature of the network, yielding increasingly dynamic, adaptable, scalable, and efficient systems. Coping with ever more stringent service requirements demands a rethinking of ML approaches to make them more efficient and faster. Distributed learning (DL) approaches have proven effective in enabling the deployment of intelligent nodes across a distributed network. Among the various DL approaches, transfer learning (TL) is a promising technique for meeting the new objectives posed by emerging networks. Through TL, ML models trained on one task can be reused to solve new, related problems without having to build a learning model from scratch. TL, combined with distributed network scenarios, thus stands out as one of the key technologies for this new era of distributed intelligence. The goal of this paper is to analyze TL performance in different networking scenarios through a suitable MATLAB implementation.
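To illustrate the core idea of TL described above, the following is a minimal sketch (in Python rather than MATLAB, purely for illustration): a simple logistic-regression model is pre-trained on a data-rich "source" task, and its weights are then reused to warm-start training on a related, data-scarce "target" task instead of starting from scratch. The data, task setup, and trainer are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Gradient-descent logistic regression.

    Passing a non-None `w` warm-starts training from existing
    weights, which is the essence of parameter-transfer TL.
    """
    if w is None:
        w = np.zeros(X.shape[1])  # train from scratch
    for _ in range(epochs):
        z = np.clip(X @ w, -30.0, 30.0)      # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))          # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

# Source task: plenty of labelled data (hypothetical synthetic data).
Xs = rng.normal(size=(500, 5))
ys = (Xs @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) > 0).astype(float)
w_source = train_logreg(Xs, ys)

# Target task: a related decision boundary, but only a few samples,
# as might be available at a single node of a distributed network.
Xt = rng.normal(size=(20, 5))
yt = (Xt @ np.array([1.0, -1.0, 0.5, 0.1, 0.0]) > 0).astype(float)

# Transfer: warm-start from the source weights, brief fine-tuning.
w_transfer = train_logreg(Xt, yt, w=w_source.copy(), epochs=20)

# Baseline: train from scratch on the scarce target data only.
w_scratch = train_logreg(Xt, yt, epochs=20)
```

The warm-started model begins fine-tuning from a boundary already close to the target task's, which is why TL typically needs far less target data and fewer training steps than learning from scratch.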