Decomposition methods have been proposed to approximate solutions to large sequential decision making problems. In contexts where an agent interacts with multiple entities, utility decomposition can be used to separate the global objective into local tasks, each considering an individual entity independently. An arbitrator is then responsible for combining the individual utilities and selecting an action in real time to solve the global problem. Although these techniques can perform well empirically, they rely on strong assumptions of independence between the local tasks and sacrifice the optimality of the global solution. This paper proposes an approach that improves upon such approximate solutions by learning a correction term represented by a neural network. We demonstrate this approach on a fisheries management problem, where multiple boats must coordinate to maximize their catch over time, as well as on a pedestrian avoidance problem for autonomous driving. In each problem, decomposition methods can scale to multiple boats or pedestrians by combining strategies computed for a single entity. We verify empirically that the proposed correction method significantly improves on the decomposition method and outperforms a policy trained on the full-scale problem without utility decomposition.

The utilities of the individual subproblems are combined in real time through an arbitrator that maximizes the expected utility [22]. Applications to aircraft collision avoidance with multiple intruders explore summing the state-action values or taking their minimum [7], [20]. While these approaches tend to perform well empirically, they sacrifice optimality: the solution to each subproblem assumes that its individual policy will be followed regardless of the other entities, in contrast to the global policy, which considers all subproblems jointly [23]. Previous approaches to scaling decision algorithms using decomposition methods relied on a distributed agent architecture [22].
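The sum and min fusion rules mentioned above can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: it assumes each entity's subproblem yields a vector of state-action values over a shared action space, and the arbitrator selects the action maximizing the fused utility.

```python
import numpy as np

def fuse_sum(q_values):
    """Arbitrate by summing per-entity Q-values (assumes additive utilities)."""
    # q_values: list of arrays, one per entity, each of shape (n_actions,)
    return int(np.argmax(np.sum(q_values, axis=0)))

def fuse_min(q_values):
    """Arbitrate by the worst-case per-entity Q-value (risk-averse fusion)."""
    return int(np.argmax(np.min(q_values, axis=0)))

# Hypothetical Q-values for two entities over two actions.
q1 = np.array([5.0, 1.0])  # entity 1 strongly prefers action 0
q2 = np.array([0.0, 1.0])  # entity 2 mildly prefers action 1

fuse_sum([q1, q2])  # sum fusion: totals [5.0, 2.0] -> action 0
fuse_min([q1, q2])  # min fusion: minima [0.0, 1.0] -> action 1
```

The two rules can disagree, as here: summing favors the action with the largest aggregate utility, while taking the minimum favors the action that is acceptable to every entity.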
Each agent is responsible for addressing one of the multiple objectives required to achieve a complex task. At each time step, an arbitrator must decide between the different actions recommended by these agents and resolve possible conflicts. Possible arbitration strategies include command fusion [7], voting [22], lexicographic ordering [33], and utility fusion [22], [23]. It has been shown that utility fusion offers a more principled way of deciding between the individual agents than voting-based approaches or command fusion [22]. An alternative approach in leader-follower scenarios consists of reducing the problem of controlling the group of agents to controlling a single group leader [15]. Other approaches rely on distributed learning algorithms, such as independent Q-learning, where each agent learns a policy without being aware of the other agents' actions [28]. The underlying assumption of independence between agents in these algorithms trades off the benefit of collaboration for easier learning of the task [8]. Utility decomposition methods have been used in many practic...