Multi-objective power scheduling (MOPS) addresses the simultaneous minimization of economic costs and different types of environmental emissions during electricity generation. Recognizing MOPS as an NP-hard problem, this article proposes a novel multi-agent deep reinforcement learning (MADRL)-based optimization algorithm. Within a custom multi-agent simulation environment that represents power-generating units as collaborative reinforcement learning (RL) agents, the MOPS problem is decomposed into sequential Markov decision processes (MDPs). The MDPs are then used to train an MADRL model, which in turn yields the optimal solution to the scheduling problem. The practical viability of the proposed method is evaluated on several experimental test systems of up to 100 units, covering both bi-objective and tri-objective problems. The results demonstrate that the proposed MADRL algorithm outperforms established methods such as teaching-learning-based optimization (TLBO), real-coded grey wolf optimization (RCGWO), the evolutionary algorithm based on decomposition (EAD), the non-dominated sorting genetic algorithm II (NSGA-II), and the non-dominated sorting genetic algorithm III (NSGA-III).
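To make the agent-per-unit decomposition concrete, the following is a minimal sketch of a multi-agent scheduling environment in the spirit described above. All names, the quadratic cost and linear emission coefficients, and the weighted scalarization of the bi-objective reward are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

class UnitAgentEnv:
    """Hypothetical sketch: each generating unit is one agent; a joint step
    sets all unit outputs for one dispatch interval and returns a shared
    (collaborative) reward combining cost and emissions."""

    def __init__(self, n_units=3, demand=450.0, penalty=100.0):
        rng = np.random.default_rng(0)
        self.n = n_units
        self.demand = demand            # system load to be met (MW)
        self.penalty = penalty          # weight on power-balance violation
        # Quadratic fuel-cost coefficients a*P^2 + b*P + c (assumed values)
        self.a = rng.uniform(0.002, 0.008, n_units)
        self.b = rng.uniform(7.0, 12.0, n_units)
        self.c = rng.uniform(100.0, 300.0, n_units)
        # Linear emission coefficients (assumed values)
        self.e = rng.uniform(0.2, 0.6, n_units)
        self.p_min = np.full(n_units, 20.0)   # unit output limits (MW)
        self.p_max = np.full(n_units, 200.0)

    def step(self, actions, w_cost=0.5, w_emis=0.5):
        """actions: one output level per agent, clipped to unit limits.
        Returns a shared reward: a weighted scalarization of fuel cost and
        emissions, minus a penalty for violating the power balance."""
        p = np.clip(actions, self.p_min, self.p_max)
        cost = np.sum(self.a * p**2 + self.b * p + self.c)
        emis = np.sum(self.e * p)
        imbalance = abs(p.sum() - self.demand)
        return -(w_cost * cost + w_emis * emis + self.penalty * imbalance)

env = UnitAgentEnv()
print(f"shared reward: {env.step(np.array([150.0, 150.0, 150.0])):.1f}")
```

In such a setting, each agent's MDP covers one unit's output decision per interval, while the shared reward keeps the agents collaborative; a tri-objective variant would simply add a third weighted term to the scalarization.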