Sixth generation (6G) in-X subnetworks have recently been proposed as short-range, low-power radio cells for supporting localized extreme wireless connectivity inside entities such as industrial robots, vehicles, and the human body. Deploying in-X subnetworks in these entities may lead to fast changes in interference levels and, hence, varying risks of communication failure. In this paper, we investigate fully distributed resource allocation for interference mitigation in dense deployments of 6G in-X subnetworks. Resource allocation is cast as a multi-agent reinforcement learning problem, and agents are trained in a simulated environment to perform channel selection with the goal of maximizing the per-subnetwork rate subject to a target rate constraint for each device. To overcome the slow convergence and performance degradation associated with fully distributed learning, we adopt a centralized training procedure in which a deep Q-network (DQN) is trained at a central location using measurements collected from all subnetworks. The policy is implemented as a Double Deep Q-Network (DDQN), which improves training stability and convergence. Performance evaluation in an in-factory environment indicates that the proposed method achieves up to a 19% rate increase relative to random channel allocation and performs only marginally worse than more complex centralized benchmarks.
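
For concreteness, the sketch below illustrates the Double-DQN target computation underlying such a channel-selection policy: the online network selects the next channel and the target network evaluates it, which reduces the over-estimation bias of plain DQN. The observation size, number of channels, discount factor, and network architecture are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a Double-DQN target for per-subnetwork channel selection.
# All shapes, names, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

N_CHANNELS = 4   # assumed number of selectable channels (discrete actions)
OBS_DIM = 8      # assumed size of the local interference/rate observation
GAMMA = 0.99     # assumed discount factor

class QNet(nn.Module):
    """Small MLP mapping a subnetwork's local observation to per-channel Q-values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_CHANNELS),
        )

    def forward(self, obs):
        return self.net(obs)

online_net, target_net = QNet(), QNet()
target_net.load_state_dict(online_net.state_dict())

def ddqn_targets(reward, next_obs, done):
    """Double-DQN target: the online net picks the next channel (selection),
    the target net scores that choice (evaluation)."""
    with torch.no_grad():
        next_action = online_net(next_obs).argmax(dim=1, keepdim=True)
        next_q = target_net(next_obs).gather(1, next_action).squeeze(1)
        return reward + GAMMA * (1.0 - done) * next_q

# Example batch: the reward could be the achieved per-subnetwork rate,
# shaped by the target rate constraint (an assumption for illustration).
batch = 32
obs = torch.randn(batch, OBS_DIM)
actions = torch.randint(N_CHANNELS, (batch, 1))
reward = torch.rand(batch)
next_obs = torch.randn(batch, OBS_DIM)
done = torch.zeros(batch)

y = ddqn_targets(reward, next_obs, done)
q_taken = online_net(obs).gather(1, actions).squeeze(1)
loss = nn.functional.mse_loss(q_taken, y)  # minimized during centralized training
```

After centralized training, each subnetwork would run only the online network on its local measurements to pick a channel, keeping execution fully distributed.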