Balancing the workload of sophisticated simulations is inherently difficult: both the computational work and the memory footprint have to be distributed over meshes that can change at any time or yield unpredictable cost per mesh entity, while modern supercomputers and their interconnects increasingly exhibit fluctuating performance. We propose a novel, lightweight balancing technique for MPI+X that accompanies traditional, prediction-based load balancing. It is a reactive diffusion approach that uses online measurements of MPI idle time to migrate tasks temporarily from overloaded to underemployed ranks. Tasks are deployed to ranks that would otherwise wait, are processed there with high priority, and their results are made available to the overloaded ranks again. This migration is nonpersistent. Our approach hijacks idle time to do meaningful work and is fully nonblocking, asynchronous, and distributed without a global data view. Tests with a seismic simulation code developed in the ExaHyPE engine uncover the method's potential. We found speed-ups of up to a factor of 2-3 for ill-balanced scenarios without logical modifications of the code base and show that the strategy is capable of reacting quickly to temporarily changing workloads or node performance.

KEYWORDS: adaptive mesh refinement, MPI+X, reactive load balancing, task-based parallelism

1 INTRODUCTION

Load balancing that decomposes work prior to a certain compute phase, such as a time step or an iteration of an equation system solver, is doomed to underperform in many sophisticated simulation codes. There are multiple reasons for this: the clock frequency of processors changes over the runtime, 1-3 the network speed is subject to noise from other applications 4,5 or I/O, and task-based multicore parallelization (MPI+X) tends to yield fluctuating throughput due to effects of the memory hierarchy, 6 work stealing, and nondeterminism in the MPI progression. While this list is not comprehensive, modern numerics notably drive the nonpredictability: they build atop dynamic adaptive mesh refinement (AMR) that changes the mesh throughout a time step or mesh sweep, 7 combine different physical models, 7-9 or solve nonlinear equation systems with iterative solvers in substeps. 10 It becomes hard or even impossible to predict a step's computational load. As adjusting parallel partitions and performing the respective data migration is often costly, many AMR codes consequently repartition only every 10th or 100th time step and tolerate certain load imbalances in between.

We propose a novel, lightweight load redistribution scheme that acts on top of traditional load balancing. It assumes, first, that parts of the underlying simulation code are phrased in terms of many expensive tasks. It assumes, second, that good AMR codes manage to hide data exchange behind computations yet cannot keep all cores busy all the time. In every solver step, some cores on some ranks have to wait for MPI data to drop in. Our idea is to offload tasks from overbooked to waiting ranks so that these work productively rather than idle. 11 The code plugs into the Thi...
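To make the reactive idea concrete, the following is a minimal sketch (not the implementation described in this paper): each rank attributes the time it spends waiting for incoming MPI messages to an idle-time counter, and after a step a rank that barely waited diffuses a few tasks to the rank that waited longest. The helper names (timedRecv, selectVictim) and the use of MPI_Allgather to obtain a global view of idle times are illustrative simplifications; the scheme proposed here is asynchronous and works without a global data view.

```cpp
// Sketch only: measure per-rank MPI idle time and pick an offloading victim.
#include <mpi.h>
#include <algorithm>
#include <vector>

// Time this rank spent waiting for incoming MPI messages in the current step.
static double idleSeconds = 0.0;

// Wrap a blocking receive so the wait time is attributed to idleSeconds.
void timedRecv(void* buf, int count, MPI_Datatype type, int src, int tag, MPI_Comm comm) {
  const double t0 = MPI_Wtime();
  MPI_Recv(buf, count, type, src, tag, comm, MPI_STATUS_IGNORE);
  idleSeconds += MPI_Wtime() - t0;
}

// After a step, decide whether this rank should offload tasks and to whom.
// Returns the victim rank, or -1 if no offloading seems advisable.
int selectVictim(MPI_Comm comm, double idleThreshold) {
  int rank, size;
  MPI_Comm_rank(comm, &rank);
  MPI_Comm_size(comm, &size);

  // Exchange per-rank idle times (a global view only for this illustration).
  std::vector<double> idle(size);
  MPI_Allgather(&idleSeconds, 1, MPI_DOUBLE, idle.data(), 1, MPI_DOUBLE, comm);

  // Only ranks that barely waited are considered overloaded.
  if (idleSeconds > idleThreshold) return -1;

  // Diffuse work towards the rank that waited longest.
  const int victim = static_cast<int>(
      std::max_element(idle.begin(), idle.end()) - idle.begin());
  return (victim != rank && idle[victim] > idleThreshold) ? victim : -1;
}

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  for (int step = 0; step < 10; ++step) {
    idleSeconds = 0.0;
    // ... process local tasks, using timedRecv() for incoming boundary data ...
    const int victim = selectVictim(MPI_COMM_WORLD, /*idleThreshold=*/1e-3);
    if (victim >= 0) {
      // ... serialise a few high-priority tasks, send them to `victim`, and
      //     receive the results back before they are needed locally ...
    }
  }
  MPI_Finalize();
  return 0;
}
```

In the actual scheme, the victim processes the received tasks with high priority and returns the results to the overloaded rank; the migration is nonpersistent, that is, task ownership never changes.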