A novel framework of reconfigurable intelligent surface (RIS)-enhanced indoor wireless networks is proposed, where an RIS mounted on a robot is employed to enable RIS mobility and enhance the service quality for mobile users. Meanwhile, non-orthogonal multiple access (NOMA) techniques are adopted to further increase spectrum efficiency, since RISs are capable of providing NOMA with artificially controlled channel conditions, which can be regarded as a beneficial operating condition for obtaining NOMA gains. To maximize the sum rate of all users, a deep deterministic policy gradient (DDPG) algorithm is invoked to jointly optimize the deployment and phase shifts of the mobile RIS as well as the power allocation policy. To improve the efficiency and effectiveness of training the DDPG agents, a federated learning (FL) concept is adopted to enable multiple agents to simultaneously explore similar environments and exchange experiences. We also prove that, under the same random exploration policy, the FL-aided deep reinforcement learning (DRL) agents can theoretically obtain a reward gain compared to independent agents. Our simulation results indicate that the mobile RIS scheme significantly outperforms the fixed RIS paradigm, providing about a threefold data rate gain. Moreover, the NOMA scheme achieves a 42% sum-rate gain over the OMA scheme. Finally, the multi-cell simulations demonstrate that the FL-enhanced DDPG algorithm achieves a faster convergence rate and better optimization performance than the independent training framework.

Index terms: Deep reinforcement learning (DRL), federated learning (FL), intelligent reflecting surfaces (IRSs), non-orthogonal multiple access (NOMA), reconfigurable intelligent surfaces (RISs), resource management
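The FL-enhanced training described above can be illustrated by a minimal sketch of FedAvg-style aggregation, in which a central server periodically averages the parameters of the DDPG agents exploring different cells. The function name `federated_average` and the toy two-layer parameter lists are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def federated_average(agent_weights):
    """Average the parameter arrays of several agents layer by layer.

    agent_weights: list of agents, each a list of NumPy arrays
    (e.g. the actor-network weights of one DDPG agent).
    Returns the element-wise mean per layer, i.e. the global model
    that would be broadcast back to all agents in an FL round.
    """
    return [np.mean(layers, axis=0) for layers in zip(*agent_weights)]

# Three hypothetical DDPG actors, each with two parameter arrays.
rng = np.random.default_rng(0)
agents = [[rng.normal(size=(4, 2)), rng.normal(size=(2,))] for _ in range(3)]

global_weights = federated_average(agents)
print([w.shape for w in global_weights])  # layer shapes are preserved
```

In practice each agent would run local DDPG updates between aggregation rounds; only the averaging step is shown here.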