This paper derives theoretical estimates of the computational cost of an isogeometric multi-frontal direct solver executed on parallel distributed-memory machines. We show theoretically that, for $C^{p-1}$ global continuity of the isogeometric solution, both the computational cost and the communication cost of the direct solver are of order $O(\log(N)\,p^2)$ in the one-dimensional (1D) case, $O(N\,p^2)$ in the two-dimensional (2D) case, and $O(N^{4/3}\,p^2)$ in the three-dimensional (3D) case, where $N$ is the number of degrees of freedom and $p$ is the polynomial order of the B-spline basis functions. The theoretical estimates are verified by numerical experiments performed with three parallel multi-frontal direct solvers: MUMPS, PaStiX, and SuperLU, available through the PetIGA toolkit built on top of PETSc. The numerical results confirm these theoretical estimates in terms of both $p$ and $N$. For a given problem size, the strong-scaling efficiency decreases rapidly as the number of processors increases, dropping to about 20 percent on 256 processors for a 3D example with $128^3$ unknowns and linear B-splines with $C^0$ global continuity, and to about 15 percent for a 3D example with $64^3$ unknowns and quartic B-splines with $C^3$ global continuity. At the same time, one cannot arbitrarily increase the problem size, since the memory required by higher-order continuity spaces is large and quickly consumes all available memory resources, even in the parallel distributed-memory version. The numerical results also suggest that distributed-memory parallel machines are highly beneficial when solving problems over higher-order continuity spaces, although the number of processors that can be employed efficiently is somewhat limited.
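To make the quoted scaling laws concrete, the following is a minimal Python sketch that evaluates the asymptotic parallel cost estimates above, $O(\log(N)\,p^2)$, $O(N\,p^2)$, and $O(N^{4/3}\,p^2)$, for illustrative problem sizes. The constant factor `c` and the chosen values of $N$ and $p$ are hypothetical placeholders for illustration, not measurements from the paper.

```python
import math

def parallel_cost(N, p, dim, c=1.0):
    """Asymptotic parallel cost of the multi-frontal direct solver
    for C^{p-1}-continuous B-splines, per the estimates in the abstract.
    The constant factor c is a hypothetical placeholder."""
    if dim == 1:
        return c * math.log(N) * p**2   # O(log(N) p^2)
    if dim == 2:
        return c * N * p**2             # O(N p^2)
    if dim == 3:
        return c * N**(4 / 3) * p**2    # O(N^{4/3} p^2)
    raise ValueError("dim must be 1, 2, or 3")

# Illustrative consequence of the 3D estimate: doubling the mesh in
# each direction (N -> 8N) multiplies the cost by 8^{4/3} = 16.
for n_per_dim in (64, 128):
    N = n_per_dim**3
    print(n_per_dim, parallel_cost(N, p=4, dim=3))
```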