We define and characterize the set of renegotiation-proof equilibria of coordination games with pre-play communication in which players have private preferences over the feasible coordinated outcomes. Renegotiation-proof equilibria provide a narrow selection from the large set of qualitatively diverse Bayesian Nash equilibria in such games. They are such that players never miscoordinate, play their jointly preferred outcome whenever there is one, and communicate only the ordinal part of their preferences. Moreover, they are robust to changes in players' beliefs, interim Pareto efficient, and evolutionarily stable.

Pre-play communication. After learning their type, but before playing this coordination game, the two players each simultaneously send a publicly observable message from a finite set of messages M; ∆(M) is the set of all probability distributions over messages in M. We assume that messages are costless. We call the game, so amended, the coordination game with communication and denote it by (Γ, M).

Strategies. A player's (ex-ante) strategy in the coordination game with communication is then a pair σ = (µ, ξ), where µ : U → ∆(M) is a (Lebesgue measurable) message function that describes which (possibly random) message is sent for each possible realization of the agent's type, and ξ : M × M → U is an action function that describes the maximal type (cutoff type) that chooses L as a function of the observed message profile; that is, when an agent who follows strategy (µ, ξ) observes a message profile (m, m′) (message m sent by the agent, and message m′ sent by the opponent), then the agent plays L if her type u is at most ξ(m, m′) (i.e., if u ≤ ξ(m, m′)), and she plays R if u > ξ(m, m′). (The choice that the threshold type plays L does not affect our analysis, given the assumption that F is atomless.) Let Σ be the set of all strategies in the game (Γ, M).

Remark 2. In principle, we should allow more general action functions ξ : U × M × M → ∆({L, R}), which specify the probability that an agent chooses L as a function of the observed message profile and the agent's type. It is simple to see, however, and proven in Lemma 1 in Appendix A.1, that any "generalized" strategy is dominated by a strategy that uses a cutoff action function in the second stage. The intuition is that, following the observation of any pair of messages, lower types always gain more (less) than higher types from choosing L (R). We thus simplify our notation by considering only cutoff action functions of the form ξ : M × M → U.

Let µ_u(m) denote the probability, given message function µ, that a player sends message m if she is of type u. Let µ̄(m) = E_u[µ_u(m)] be the mean probability that a player of a random type sends message m (where the expectation is taken with respect to F). Let supp(µ̄) = {m ∈ M | µ̄(m) > 0} denote the support of µ̄. With a slight abuse of notation we write ξ(m, m′) = L when all types (who send message m with positive probability) play L (i.e., when ξ(m, m′) ≥ sup{u ∈ U | µ_u(m) > 0}), and we write ξ(m, m′) = R when all such types play R (i.e., when ξ(m, m′) < inf{u ∈ U | µ_u(m) > 0}).
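To make the intuition behind Remark 2 concrete, the following is a minimal single-crossing sketch. The gain function D and its exact form are our illustrative assumptions; only its monotonicity in the type is asserted in the text above. Fix the opponent's strategy and a realized message profile (m, m′), and write

    D(u) = E[payoff from L − payoff from R | m, m′],   with u < u′ ⇒ D(u) > D(u′).

If some type u′ weakly prefers L after (m, m′) (i.e., D(u′) ≥ 0), then every lower type u < u′ strictly prefers L. Hence the set of types that weakly prefer L after (m, m′) is an interval of the form {u ∈ U | u ≤ ξ(m, m′)}, where ξ(m, m′) = sup{u ∈ U | D(u) ≥ 0}, so any generalized action function is dominated by a cutoff action function with these thresholds.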
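For concreteness, here is a minimal computational sketch of the strategy objects defined above. Every specific choice in it is an assumption made for illustration only: we take U = [0, 1], F uniform, M = {l, r}, a pure message rule that reports the ordinal part of the type, and an arbitrary disagreement convention in the cutoff table; none of these primitives comes from the model itself.

```python
import random

# Illustrative sketch only. Assumed (not from the paper): U = [0, 1],
# F = Uniform(0, 1), M = {"l", "r"}, and the message rule and cutoff
# table below.

M = ("l", "r")

def mu(u: float) -> str:
    """Message function mu : U -> Delta(M), here degenerate (pure):
    the agent reports the ordinal part of her type."""
    return "l" if u <= 0.5 else "r"

# Cutoff action function xi : M x M -> U, keyed by (own message,
# opponent's message). A cutoff of 1.0 means all types play L
# (xi(m, m') = L in the paper's shorthand); 0.0 means all play R.
# On disagreement both players default to L -- an arbitrary
# convention chosen so that simulated play never miscoordinates.
XI = {
    ("l", "l"): 1.0,
    ("r", "r"): 0.0,
    ("l", "r"): 1.0,
    ("r", "l"): 1.0,
}

def action(u: float, m: str, m_opp: str) -> str:
    """Second-stage play: L iff the agent's type is at most the cutoff."""
    return "L" if u <= XI[(m, m_opp)] else "R"

def mu_bar(m: str, n: int = 100_000) -> float:
    """Monte Carlo estimate of mu_bar(m) = E_u[mu_u(m)] under F."""
    return sum(mu(random.random()) == m for _ in range(n)) / n

if __name__ == "__main__":
    u1, u2 = random.random(), random.random()
    m1, m2 = mu(u1), mu(u2)
    print(action(u1, m1, m2), action(u2, m2, m1))  # always coordinated
    print(round(mu_bar("l"), 2))  # approx. F(0.5) = 0.5
```

Under these assumed primitives the two simulated actions always coordinate, and the estimate of µ̄(l) recovers F(1/2) = 0.5; any other measurable message rule or cutoff table can be substituted without changing the interface.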