Distributed multi-agent optimization lies at the core of many applications in distributed learning, control, estimation, etc. Most existing algorithms assume access to first-order information of the objective and have been analyzed for convex problems. However, there are situations where the objective is nonconvex and one can only evaluate function values at finitely many points. In this paper we consider derivative-free distributed algorithms for nonconvex multi-agent optimization, based on recent progress in zero-order optimization. We develop two algorithms for different settings, provide a detailed analysis of their convergence behavior, and compare them with existing centralized zero-order algorithms and gradient-based distributed algorithms.

Recently there has been increasing interest in zero-order optimization, where one does not have access to the gradient of the objective. Such situations occur, for example, when only black-box procedures are available for computing the values of the functional characteristics of the problem, or when resource limitations restrict the use of fast or automatic differentiation techniques. Many existing works on zero-order optimization are based on constructing gradient estimators from finitely many function evaluations; a commonly used form of such estimators is recalled below. [26] proposed and analyzed a single-point gradient estimator, and [27] further studied the convergence rate of single-point zero-order algorithms for highly smooth objectives. [28] proposed two-point gradient estimators and showed that the convergence of the resulting algorithms is comparable to that of their first-order counterparts. [29] studied two-point gradient estimators in stochastic nonconvex zero-order optimization. [30] and [31] showed that for stochastic zero-order convex optimization with two-point gradient estimators, the optimal rate $O(\sqrt{d/N})$ is achievable, where $N$ denotes the number of function value queries. [32] proposed and analyzed a zero-order stochastic Frank-Wolfe algorithm.

Some recent works have also started to combine zero-order and distributed methods. [33] proposed a distributed zero-order algorithm for stochastic nonconvex problems based on the method of multipliers. [34] proposed a zero-order ADMM algorithm for distributed online convex optimization. [35] proposed a distributed zero-order algorithm over random networks and established its convergence for strongly convex objectives. [36] considered distributed zero-order methods for constrained convex optimization.

On the other hand, many questions in distributed zero-order optimization remain to be studied, e.g., how the zero-order and distributed aspects affect each other's performance, and whether the fundamental properties of each can be preserved by appropriately designing their combination. This paper aims to provide answers along this line: we propose and analyze two zero-order distributed algorithms for deterministic nonconvex optimization, and compare their convergence rates with those of their distributed first-order and centralized zero-order counterparts. The first alg...
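For concreteness, we recall a commonly used smoothing-based form of the single-point and two-point gradient estimators discussed above; this is a standard construction, and the exact estimators in [26] and [28] may differ in normalization and in the distribution of the random direction:
\[
\hat{g}_{\mathrm{1pt}}(x) \;=\; \frac{d}{\delta}\, f(x + \delta u)\, u,
\qquad
\hat{g}_{\mathrm{2pt}}(x) \;=\; \frac{d}{2\delta}\,\bigl(f(x + \delta u) - f(x - \delta u)\bigr)\, u,
\]
where $f:\mathbb{R}^d \to \mathbb{R}$ is the objective, $\delta > 0$ is a smoothing radius, and $u$ is drawn uniformly from the unit sphere. Both estimators are unbiased for the gradient of the smoothed surrogate $f_\delta(x) = \mathbb{E}_v[f(x + \delta v)]$, with $v$ uniform over the unit ball, but the two-point estimator typically has much smaller variance, which is what allows the corresponding algorithms to approach the rates of their first-order counterparts up to dimension-dependent factors.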