<p>In distributed optimization schemes where a group of agents is connected to a central coordinator, the algorithm typically requires each agent to solve a private local proximal minimization sub-problem and to exchange data frequently with the coordinator in order to solve the global problem. These frequent exchanges incur a substantial communication cost, which motivates communication reduction in scenarios where communication is expensive. Integrating Gaussian Processes (GPs) as a learning component into the Alternating Direction Method of Multipliers (ADMM) has proven successful in learning each agent's local proximal operator, and this learning technique is therefore effective in reducing communication between the agents and the coordinator. A key element of integrating GPs into the ADMM algorithm is the mechanism by which the coordinator decides when communication with an agent is required. The decision strategy used in this setting strongly affects both the overall reduction in communication expenditure and the performance of the ADMM algorithm. For this reason, we construct a general framework that casts the querying mechanism as an optimization problem balancing two objectives: maximizing the reduction in communication cost while minimizing the prediction error. Motivated by this framework, we propose several query strategies that use the uncertainty estimates provided by the GP to determine whether a prediction is reliable enough to skip a communication round. We propose a joint query strategy, derived as a simplification of the general framework, that minimizes an L1-norm communication cost constrained by the trace of the joint uncertainty of the ADMM variables, which is computed from the prediction uncertainties of all agents. Additionally, we derive three decision mechanisms, motivated by a confidence interval analysis, that decide whether to query each agent individually based on that agent's uncertainty estimate. We employ multiple measures to quantify the trade-off between communication cost reduction and the accuracy/optimality of the optimization solution. Extensive simulations of a distributed sharing problem with quadratic agent cost functions demonstrate that the proposed methods achieve significant communication reduction while maintaining good solution accuracy.</p>
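<p>To make the querying idea concrete, the sketch below illustrates one possible way a coordinator could turn GP predictive uncertainty into skip-or-communicate decisions. It is a minimal, illustrative simplification and not the paper's algorithm: the function names, the confidence-interval width test in <code>per_agent_query</code>, the budgeted trace ranking in <code>joint_query</code>, and all numerical values are assumptions chosen only to mirror the per-agent and joint strategies described above.</p>
<pre><code>import numpy as np

def per_agent_query(pred_std, threshold, z=1.96):
    """Per-agent decision sketch: query (communicate with) the agent only
    if the width of the GP prediction's confidence interval exceeds a
    threshold. `pred_std` is the GP posterior standard deviation of the
    agent's predicted local update; `threshold` and `z` are illustrative
    tuning parameters, not values from the paper."""
    return 2.0 * z * pred_std > threshold

def joint_query(pred_covs, budget):
    """Joint decision sketch: rank agents by the trace of their GP
    predictive covariance (their contribution to the joint uncertainty
    of the ADMM variables) and query the most uncertain agents, subject
    to a per-round communication budget standing in for an L1-style
    cost limit."""
    traces = np.array([np.trace(C) for C in pred_covs])
    order = np.argsort(traces)[::-1]             # most uncertain first
    decisions = np.zeros(len(pred_covs), dtype=bool)
    decisions[order[:budget]] = True             # spend the budget on them
    return decisions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical GP predictive covariances for 5 agents' local updates
    covs = [np.diag(rng.uniform(0.01, 1.0, size=3)) for _ in range(5)]
    stds = [np.sqrt(np.trace(C) / 3) for C in covs]
    print("per-agent decisions:", [per_agent_query(s, threshold=1.0) for s in stds])
    print("joint decisions    :", joint_query(covs, budget=2))
</code></pre>
<p>In this toy usage, agents whose GP predictions are sufficiently certain are skipped in the current ADMM round and their predicted updates would be used in place of communicated ones; the thresholds and budget control the trade-off between communication savings and prediction error discussed above.</p>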