Cloud computing provides elastic resources under a pay-as-you-go model, yet high energy consumption remains an urgent problem for cloud service providers. How to schedule cloud tasks so as to reduce operational cost while improving service quality has therefore become a research focus in cloud computing. Researchers typically treat cloud task scheduling as a multi-objective optimization problem, aiming to reduce power consumption, optimize load balancing, and improve response speed. Many optimization algorithms exist, including traditional scheduling algorithms and heuristic algorithms, but they struggle to cope with dynamic, online, and complex cloud environments: multi-objective optimization either fails outright or succeeds only at high cost with poor results. Deep reinforcement learning, by contrast, performs well on multi-objective cloud task scheduling and offers new research directions for the problem. In this paper, the D3QN algorithm is improved by incorporating noisy networks, yielding the NoisyD3QN algorithm, which strengthens the exploration ability of the reinforcement learning model and raises its performance ceiling. A cloud task scheduling model based on NoisyD3QN is then constructed for simulation experiments, using the power consumption model proposed by Fan et al. Comparing the performance of eight algorithms on different optimization targets, the experiments show that the proposed algorithm effectively reduces cluster power consumption, improves response speed, shortens user waiting time, and optimizes load balancing, significantly reducing the average standard deviation of cluster CPU utilization.
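The abstract cites the power consumption model proposed by Fan et al. A common reading is the linear CPU-utilization model from Fan, Weber, and Barroso, in which server power scales linearly between idle and peak power with CPU utilization. The sketch below illustrates that linear form; the `p_idle` and `p_busy` wattage values are illustrative placeholders, not figures from this paper.

```python
def fan_power(cpu_util: float, p_idle: float = 100.0, p_busy: float = 200.0) -> float:
    """Linear server power model in the style of Fan et al.

    cpu_util: CPU utilization in [0, 1].
    p_idle:   power draw (W) of an idle server (illustrative value).
    p_busy:   power draw (W) at full utilization (illustrative value).
    """
    # Power grows linearly from p_idle at 0% utilization to p_busy at 100%.
    return p_idle + (p_busy - p_idle) * cpu_util


# Cluster power is then the sum over servers of their modeled draw.
cluster_power = sum(fan_power(u) for u in [0.2, 0.5, 0.9])
```

Under this model, consolidating load so that fewer servers run at higher utilization (and idle servers can be switched off) directly lowers total cluster power, which is why CPU-utilization statistics appear as an optimization target in the experiments.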
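The exploration gain attributed to the noisy network comes from replacing deterministic linear layers with layers whose weights are perturbed by parameterized Gaussian noise, so exploration is driven by the network itself rather than by ε-greedy action selection. The following is a minimal NumPy sketch of such a noisy linear layer with factorized Gaussian noise; the class name, initialization constants, and interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np


def _scale_noise(size: int, rng: np.random.Generator) -> np.ndarray:
    # Factorized-noise transform f(x) = sign(x) * sqrt(|x|).
    x = rng.standard_normal(size)
    return np.sign(x) * np.sqrt(np.abs(x))


class NoisyLinear:
    """Linear layer with learnable noise scales (illustrative sketch)."""

    def __init__(self, in_features: int, out_features: int,
                 sigma0: float = 0.5, seed: int = 0):
        self.in_features = in_features
        self.out_features = out_features
        self.rng = np.random.default_rng(seed)
        bound = 1.0 / np.sqrt(in_features)
        # Mean parameters (would be learned by gradient descent).
        self.w_mu = self.rng.uniform(-bound, bound, (out_features, in_features))
        self.b_mu = self.rng.uniform(-bound, bound, out_features)
        # Noise-scale parameters (also learnable).
        self.w_sigma = np.full((out_features, in_features), sigma0 / np.sqrt(in_features))
        self.b_sigma = np.full(out_features, sigma0 / np.sqrt(in_features))

    def forward(self, x: np.ndarray, noisy: bool = True) -> np.ndarray:
        if noisy:
            # Factorized noise: one vector per input and output dimension.
            eps_in = _scale_noise(self.in_features, self.rng)
            eps_out = _scale_noise(self.out_features, self.rng)
            w = self.w_mu + self.w_sigma * np.outer(eps_out, eps_in)
            b = self.b_mu + self.b_sigma * eps_out
        else:
            # Evaluation mode: use the mean weights only.
            w, b = self.w_mu, self.b_mu
        return x @ w.T + b
```

Because the noise scales are trainable, the agent can learn to damp the perturbation as its value estimates stabilize, giving state-dependent exploration without hand-tuned ε schedules.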