In this work, we study a generic network cost minimization problem in which every node determines a local decision vector. Each node incurs a cost depending on its own decision vector, and each link incurs a cost depending on the decision vectors of its two end nodes. All nodes cooperate to minimize the overall network cost. The formulated network cost minimization problem has broad applications in distributed signal processing and control over multi-agent systems. To obtain a decentralized algorithm for the formulated problem, we resort to the distributed alternating direction method of multipliers (DADMM). However, each iteration of the DADMM requires solving a local optimization problem at each node, leading to a prohibitive computational burden in many circumstances. We therefore propose a distributed linearized ADMM (DLADMM) algorithm for network cost minimization. Each DLADMM iteration involves only closed-form computations and avoids solving local optimization problems, which greatly reduces the computational complexity compared to the DADMM. We prove that the DLADMM converges to an optimal point when the local cost functions are convex and have Lipschitz continuous gradients. A linear convergence rate of the DLADMM is also established when the local cost functions are, in addition, strongly convex. Numerical experiments corroborate the effectiveness of the DLADMM: it exhibits convergence performance similar to that of the DADMM while incurring much lower computational overhead. The impact of network topology, connectivity, and algorithm parameters is also investigated through simulations.
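To make the source of the per-iteration savings concrete, the following is a minimal sketch of the linearization idea in generic ADMM notation; the symbols $f_i$, $x_i$, $A_i$, $b_i^k$, $\rho$, and $\eta$ are illustrative assumptions, not the paper's actual formulation. In a standard DADMM iteration, node $i$ must solve a local subproblem of the form
\[
x_i^{k+1} = \arg\min_{x_i} \; f_i(x_i) + \frac{\rho}{2}\,\bigl\| A_i x_i - b_i^k \bigr\|^2 ,
\]
which generally requires an inner iterative solver when $f_i$ is not quadratic. A linearized variant replaces $f_i$ with its first-order expansion at the current iterate $x_i^k$ and adds a proximal term with parameter $\eta > 0$:
\[
x_i^{k+1} = \arg\min_{x_i} \; \nabla f_i(x_i^k)^\top (x_i - x_i^k)
+ \frac{\rho}{2}\,\bigl\| A_i x_i - b_i^k \bigr\|^2
+ \frac{\eta}{2}\,\bigl\| x_i - x_i^k \bigr\|^2 ,
\]
whose minimizer is available in closed form from the first-order optimality condition:
\[
x_i^{k+1} = \bigl( \rho A_i^\top A_i + \eta I \bigr)^{-1}
\bigl( \eta x_i^k - \nabla f_i(x_i^k) + \rho A_i^\top b_i^k \bigr) .
\]
Because the linearized subproblem is quadratic, each iteration reduces to a gradient evaluation and a linear solve, which is the computational distinction between the DLADMM and the DADMM described above.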