This article studies distributed nonsmooth convex optimization problems for second-order multiagent systems. The objective function is the sum of local cost functions, which are convex but nonsmooth. Each agent knows only its local cost function, its local constraint set, and information from its neighbors. Using the proximal operator and Lagrangian methods, novel continuous-time distributed proximal-gradient algorithms with derivative feedback are proposed to solve the nonsmooth convex optimization problems arising in the consensus and resource allocation of multiagent systems, respectively. Under the proposed algorithms, both problems are solved and the system converges to the optimal solution. Finally, simulation examples illustrate the effectiveness of the proposed algorithms.
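To make the role of the proximal operator concrete, the following is a minimal illustrative sketch, not the paper's algorithm: it shows a standard discrete-time proximal-gradient step for a composite objective (smooth part plus nonsmooth ℓ1 term), whereas the article's algorithms are continuous-time, distributed, and use derivative feedback. The function names, the ℓ1 choice of nonsmooth term, and all parameter values here are assumptions for illustration only.

```python
import numpy as np

def prox_l1(x, lam):
    # Proximal operator of lam*||.||_1 (soft thresholding):
    # prox_{lam f}(x) = argmin_y { lam*||y||_1 + 0.5*||y - x||^2 }
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_grad_step(x, grad_smooth, lam, step):
    # One proximal-gradient step for minimizing g(x) + lam*||x||_1,
    # where g is smooth: gradient step on g, then prox of the nonsmooth part.
    return prox_l1(x - step * grad_smooth(x), step * lam)

# Illustrative problem: minimize 0.5*||x - b||^2 + lam*||x||_1,
# whose minimizer is the soft threshold of b.
b = np.array([2.0, -0.3, 0.5])
lam = 0.4
grad = lambda x: x - b  # gradient of the smooth part

x = np.zeros(3)
for _ in range(100):
    x = prox_grad_step(x, grad, lam, 0.5)
```

In the distributed setting of the article, each agent would apply such a proximal step to its own local cost while exchanging state information with neighbors; the sketch above only conveys the single-agent building block.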