To support network agility and dynamism, the Distributed Analytics and Information Sciences (DAIS) International Technology Alliance (ITA) (https://dais-ita.org/pub) has introduced a new architecture called Software Defined Coalitions (SDC), which significantly extends Software Defined Networking (SDN) to include communication, computation, storage, database, and sensor resources. Reinforcement Learning (RL) has been shown to be very effective for managing SDC. However, due to link failures or operational requirements, an SDC may become fragmented and later reconnected over time. This paper investigates how RL can be made robust and efficient by transfer learning (TL) in the presence of SDC fragmentation.

For illustration, we consider an SDC with two domains, each of which allocates analytic tasks to data servers for processing. Each domain has a local RL agent that distributes tasks to servers within the domain. When the SDC is formed, a global RL agent is established to interact with the two local agents so that tasks can then be allocated to servers anywhere across the SDC for efficiency.

Our objective here is twofold. First, training the local RL agents is challenging due to the state-action space explosion; we adopt a newly developed method that separates the state space from the action space and show how it improves training. Second, we develop a TL technique to train the global RL agent, which significantly reduces the time required to achieve close-to-optimal performance after the SDC domains are reconnected following fragmentation. As a result, the combined RL-TL technique enables efficient and robust management and control of SDC despite fragmentation.
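To make the two-level allocation structure concrete, the following is a minimal sketch, not the paper's implementation: two domains with hypothetical local allocation policies and a global coordinator that routes tasks across the reconnected SDC. A greedy least-loaded heuristic stands in for the trained RL policies, and all class and method names are illustrative assumptions.

```python
class Domain:
    """One SDC domain holding a set of data servers."""

    def __init__(self, name, num_servers):
        self.name = name
        self.loads = [0] * num_servers  # outstanding work per server

    def allocate_local(self, task_cost):
        """Local-agent stand-in: send the task to the least-loaded server
        within this domain (a greedy proxy for the local RL policy)."""
        idx = min(range(len(self.loads)), key=self.loads.__getitem__)
        self.loads[idx] += task_cost
        return idx


class GlobalAgent:
    """Global-agent stand-in: once the SDC is formed, pick the domain with
    the lowest total load, then defer to that domain's local policy."""

    def __init__(self, domains):
        self.domains = domains

    def allocate(self, task_cost):
        domain = min(self.domains, key=lambda d: sum(d.loads))
        server = domain.allocate_local(task_cost)
        return domain.name, server


# Two previously fragmented domains reconnect into one SDC.
a, b = Domain("A", 2), Domain("B", 3)
sdc = GlobalAgent([a, b])
for cost in [4, 2, 3, 1]:
    sdc.allocate(cost)
```

In the paper's setting, the greedy choices above would instead be made by the trained local and global RL agents, with TL used to warm-start the global agent after reconnection.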