We study the Maximum Weight Matching (MWM) problem for general graphs through max-product Belief Propagation (BP) and related Linear Programming (LP). The BP approach provides distributed heuristics for finding the Maximum A Posteriori (MAP) assignment in a joint probability distribution represented by a Graphical Model (GM), and the respective LPs can be considered as continuous relaxations of the discrete MAP problem. It was recently shown that a BP algorithm converges to the correct MAP/MWM assignment under a simple GM formulation of MWM as long as the corresponding LP relaxation is tight. First, motivated by the need to enforce this tightness condition, we consider a new GM formulation of MWM, say C-GM, using non-intersecting odd-sized cycles in the graph: the new corresponding LP relaxation, say C-LP, becomes tight for more MWM instances. However, the tightness of C-LP no longer guarantees convergence and correctness of the new BP on C-GM. To address this issue, we introduce a novel graph transformation applied to C-GM, which results in another GM formulation of MWM, and prove that the respective BP on it converges to the correct MAP/MWM assignment as long as C-LP is tight. Finally, we also show that C-LP always has half-integral solutions, which leads to an efficient BP-based MWM heuristic consisting of sequential, "cutting plane", modifications to the underlying GM. Our experiments show that this BP-based cutting-plane heuristic performs as well as its counterpart based on traditional LP solvers.
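For intuition, a cycle-constrained relaxation of this type has roughly the following form, where $x_e \in [0,1]$ is the relaxed edge indicator, $w_e$ the edge weight, $\delta(v)$ the set of edges incident to vertex $v$, and $\mathcal{C}$ the chosen collection of non-intersecting odd-sized cycles; the notation here is illustrative and the formal definition of C-LP may differ in details.

\begin{align*}
\text{C-LP:}\qquad \max_{x}\ & \sum_{e \in E} w_e x_e \\
\text{subject to}\ & \sum_{e \in \delta(v)} x_e \le 1 && \forall\, v \in V, \\
& \sum_{e \in E(C)} x_e \le \frac{|C|-1}{2} && \forall\, C \in \mathcal{C}, \\
& x_e \in [0,1] && \forall\, e \in E.
\end{align*}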
Introduction

Graphical Models (GMs) have been utilized for reasoning in a variety of scientific fields [1][2][3][4]. Such models use a graph structure to encode a joint probability distribution, where vertices correspond to random variables and edges specify conditional dependencies. An important inference task in many applications involving GMs is to find the most likely assignment to the variables in a GM, i.e., the maximum a posteriori (MAP) configuration. The max-product Belief Propagation (BP) algorithm is a popular approach for approximately solving the MAP inference problem. BP is an iterative, message-passing algorithm that is exact on tree-structured GMs. However, BP often shows remarkably strong heuristic performance beyond trees, i.e., on GMs with loops. Distributed implementation, associated ease of programming and strong parallelization potential are the main reasons for the growing popularity of the BP algorithm; see, e.g., [5,6] for recent discussions of BP's parallel implementations.

The convergence and correctness of BP were recently established for a certain class of loopy GM formulations of several classical combinatorial optimization problems, including matchings [7-9], perfect matchings [10], shortest paths [11], independent sets [12] and network flows [13]. The important common feature of these instances is that BP converges in polynomial time to a correct MAP assignment when the Linear Programming (LP) relaxation of the MAP inference problem is tight, i.e., when it shows no integrality gap.
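To make the message-passing picture concrete, the following is a minimal sketch (not the GM formulation studied in this paper) of max-product BP for MAP inference on a small pairwise graphical model; on the toy tree used below BP is exact, so the returned assignment is the true MAP. The function name, the potentials and the graph are all illustrative.

import numpy as np

def max_product_bp(nodes, edges, node_pot, edge_pot, n_iters=20):
    # node_pot[v]: 1-D array of non-negative potentials over the states of v.
    # edge_pot[(u, v)]: 2-D array, rows indexed by states of u, columns by states of v.
    nbrs = {v: [] for v in nodes}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    # One message per directed edge, initialized to all-ones vectors.
    msgs = {(u, v): np.ones(len(node_pot[v])) for u, v in edges}
    msgs.update({(v, u): np.ones(len(node_pot[u])) for u, v in edges})

    for _ in range(n_iters):
        new_msgs = {}
        for (i, j) in msgs:
            pot = edge_pot[(i, j)] if (i, j) in edge_pot else edge_pot[(j, i)].T
            # Product of messages flowing into i from every neighbour except j.
            others = [msgs[(k, i)] for k in nbrs[i] if k != j]
            incoming = np.prod(others, axis=0) if others else np.ones(len(node_pot[i]))
            # m_{i->j}(x_j) = max_{x_i} phi_i(x_i) * psi_{ij}(x_i, x_j) * incoming(x_i)
            m = np.max(pot * (node_pot[i] * incoming)[:, None], axis=0)
            new_msgs[(i, j)] = m / m.max()  # normalize for numerical stability
        msgs = new_msgs

    # Each variable picks the state maximizing its (max-marginal) belief.
    return {i: int(np.argmax(node_pot[i] *
                             np.prod([msgs[(k, i)] for k in nbrs[i]], axis=0)))
            for i in nodes}

# Toy tree a - b - c with binary variables; the unique MAP assignment is (0, 0, 0).
nodes = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c")]
node_pot = {"a": np.array([1.0, 2.0]),
            "b": np.array([1.0, 1.0]),
            "c": np.array([3.0, 1.0])}
agree = np.array([[3.0, 1.0], [1.0, 3.0]])  # favours equal neighbouring states
edge_pot = {("a", "b"): agree, ("b", "c"): agree}
print(max_product_bp(nodes, edges, node_pot, edge_pot))  # -> {'a': 0, 'b': 0, 'c': 0}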