Distributed optimization has emerged as a pivotal research area with widespread applications in machine learning and networked systems. While distributed optimization algorithms allow nodes to collaboratively minimize a global objective function through information exchange, this very exchange makes them susceptible to privacy breaches, potentially leading to severe and unpredictable consequences. As differential privacy becomes the de facto standard for privacy preservation, recent studies have integrated it into distributed optimization. However, existing differentially private distributed optimization algorithms typically require continuous communication and noise injection, which results in substantial communication overhead and degraded optimization performance. In this paper, we introduce a novel event-triggered differentially private distributed optimization algorithm that leverages intermittent Laplace noise injection. Recognizing that event-triggered timings may inadvertently reveal gradient information about the objective function, we introduce an enhanced ε-differential privacy definition specifically tailored for event-triggered communication, accounting for both the transmitted state information and the event-triggered timings. By injecting Laplace noise only intermittently, our mechanism significantly reduces the total amount of noise required, thereby minimizing its negative impact on the algorithm's performance. We rigorously prove that, even with reduced noise, our algorithm preserves the desired privacy guarantee. Furthermore, our algorithm achieves accurate convergence and improved optimization performance in constrained optimization problems, while reducing communication overhead. Numerical simulations confirm the effectiveness of our proposed algorithm.
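To illustrate the idea of intermittent noise injection under event-triggered communication, the following is a minimal sketch, not the paper's actual algorithm: each node runs consensus-based gradient descent on simple quadratic local objectives, broadcasts its state only when it drifts beyond a trigger threshold since the last broadcast, and adds Laplace noise only at those triggering instants. All names and parameter values (`threshold`, `noise_scale`, the mixing matrix `W`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem: n nodes minimize sum_i f_i(x), f_i(x) = 0.5*(x - a_i)^2,
# whose global optimum is mean(a). A complete graph with uniform weights serves
# as the (doubly stochastic) mixing matrix.
a = np.array([1.0, 2.0, 3.0, 4.0])
n = len(a)
W = np.full((n, n), 1.0 / n)

x = np.zeros(n)            # local states
x_hat = np.zeros(n)        # last broadcast (noise-perturbed) states
step, threshold, noise_scale = 0.1, 0.05, 0.01
broadcasts = 0

for k in range(200):
    for i in range(n):
        # Event trigger: broadcast only when the state has drifted far enough
        # from the last broadcast value.
        if abs(x[i] - x_hat[i]) > threshold:
            # Laplace noise is injected only at triggering instants,
            # reducing the total noise compared to per-iteration injection.
            x_hat[i] = x[i] + rng.laplace(0.0, noise_scale)
            broadcasts += 1
    grad = x - a                       # local gradients of f_i
    x = W @ x_hat - step * grad        # consensus on broadcast states + gradient step
```

Because the trigger condition suppresses broadcasts once states settle, the total number of (noisy) transmissions stays well below the one-per-node-per-iteration cost of continuously communicating schemes, while the states still approach the global optimum up to a small threshold- and noise-induced bias.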