“…Examples of such methods are multi-objective bucket elimination (MOBE) [94,93], also known as multi-objective variable elimination (MOVE, the more common term in the planning and reinforcement learning communities), which eliminates all agents from a MOCoG in sequence by solving a series of local sub-problems, finding local coverage sets as best responses to neighbouring agents. Other such methods include multi-objective Russian doll search [95], multi-objective (AND/OR) branch-and-bound tree search [65,66,96] using mini-bucket heuristics [94,65], Pareto local search [44], and multi-objective max-sum [22]. Many of these papers note that PCSs can grow very large very quickly, making it infeasible to compute exact PCSs.…”
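To make the agent-elimination step concrete, the sketch below shows a bare-bones version of the idea behind MOVE on a toy MOCoG: local payoff factors map joint actions of their scope to sets of value vectors, and eliminating an agent replaces every factor it participates in by a new factor over its neighbours, keeping only the Pareto-non-dominated value vectors (the local coverage set) for each neighbour joint action. This is an illustrative sketch only, not the algorithm of [94,93]; the choice of Python, maximisation with two objectives, the toy payoff tables, and the helper names pareto_prune, cross_sum and eliminate are all assumptions made for the example.

```python
from itertools import product


def pareto_prune(vectors):
    """Keep only the Pareto-non-dominated value vectors (maximisation)."""
    front = []
    for v in vectors:
        dominated = any(
            all(u[i] >= v[i] for i in range(len(v))) and tuple(u) != tuple(v)
            for u in vectors
        )
        if not dominated:
            front.append(tuple(v))
    return sorted(set(front))


def cross_sum(vector_sets):
    """All component-wise sums obtained by picking one vector from each set."""
    return [tuple(map(sum, zip(*choice))) for choice in product(*vector_sets)]


def eliminate(agent, actions, factors):
    """Replace every factor mentioning `agent` by one factor over its neighbours,
    keeping, per neighbour joint action, the Pareto front over the agent's responses."""
    involved = [f for f in factors if agent in f[0]]
    remaining = [f for f in factors if agent not in f[0]]
    new_scope = tuple(sorted({a for scope, _ in involved for a in scope} - {agent}))
    new_table = {}
    for neighbour_joint in product(*(actions[a] for a in new_scope)):
        assignment = dict(zip(new_scope, neighbour_joint))
        candidates = []
        for own_action in actions[agent]:
            assignment[agent] = own_action
            vector_sets = [table[tuple(assignment[a] for a in scope)]
                           for scope, table in involved]
            candidates.extend(cross_sum(vector_sets))
        new_table[neighbour_joint] = pareto_prune(candidates)
    return remaining + [(new_scope, new_table)]


if __name__ == "__main__":
    # Toy 3-agent chain with two conflicting objectives (illustrative numbers only).
    actions = {"a1": [0, 1], "a2": [0, 1], "a3": [0, 1]}
    f12 = {(i, j): [(float(i + j), float(2 - i - j))] for i in (0, 1) for j in (0, 1)}
    f23 = {(j, k): [(float(j + k), float(2 - j - k))] for j in (0, 1) for k in (0, 1)}
    factors = [(("a1", "a2"), f12), (("a2", "a3"), f23)]
    for agent in ("a1", "a2", "a3"):          # elimination order
        factors = eliminate(agent, actions, factors)
    # The remaining scope-free factor holds the value vectors of the Pareto coverage set.
    print(factors[0][1][()])
```

Even on this three-agent chain, the final set already contains five incomparable value vectors, which illustrates the remark above that Pareto coverage sets can grow quickly and make exact computation infeasible.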