Previous research focuses on deep reinforcement learning (DRL) approaches to diverse single-objective variants of the dynamic flexible job shop scheduling problem (DFJSP), e.g., optimizing energy consumption, earliness and tardiness penalties, or machine utilization rate, and these approaches achieve substantial improvements in objective metrics compared with metaheuristic algorithms such as the genetic algorithm (GA) and dispatching rules such as most remaining time first (MRT). However, single-objective optimization on the job shop floor cannot satisfy the requirements of modern smart manufacturing systems, and the multi-objective DFJSP has become the mainstream problem at the core of intelligent workshops. The complex production environment of a real-world factory gives scheduling entities sophisticated characteristics, e.g., non-uniform job processing times, uncertainty in the number of operations, due-time constraints, and the need to avoid both prolonged slack time and excessive load on a single machine. These characteristics motivate a DRL method built on a combination of dispatching rules, so that the scheduler can adapt to the manufacturing environment at different rescheduling points and accumulate maximum rewards toward a global optimum. In this work, we apply a dual-layer double deep Q-network (DLDDQN) structure to solve the DFJSP in real time under new job arrivals, optimizing two objectives simultaneously: minimization of the total delay time and of the makespan. The framework comprises two layers (agents): the higher layer, named the goal selector, uses a DDQN as a function approximator to select one of six proposed reward forms that embody the two optimization objectives, while the lower layer, called the actuator, uses a DDQN to choose the dispatching rule with the maximum Q-value. Training on the generated benchmark instances converged well in our framework, and the comparative experiments validated the superiority and generality of the proposed DLDDQN.
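
To make the two-layer decision concrete, the following is a minimal sketch, not the authors' implementation, of how the goal selector and actuator could act at a rescheduling point. It assumes PyTorch, an arbitrary state dimension, hidden-layer sizes, and number of dispatching rules (all hypothetical); only the count of six reward forms comes from the abstract. In the paper the selected reward form shapes the reward signal used to train the actuator; the sketch shows only greedy action selection, not the DDQN training update with a target network.

```python
# Illustrative dual-layer decision step; all sizes and names are assumptions.
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Simple MLP used as the DDQN function approximator (illustrative sizes)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

STATE_DIM = 8          # assumed number of shop-floor features at a rescheduling point
N_REWARD_FORMS = 6     # six reward forms, as stated in the abstract
N_RULES = 4            # assumed number of composite dispatching rules

goal_selector = QNet(STATE_DIM, N_REWARD_FORMS)   # higher layer (goal selector)
actuator = QNet(STATE_DIM, N_RULES)               # lower layer (actuator)

def decide(state: torch.Tensor) -> tuple[int, int]:
    """At a rescheduling point: pick a reward form, then the rule with max Q-value."""
    with torch.no_grad():
        reward_form = int(goal_selector(state).argmax())
        rule = int(actuator(state).argmax())
    return reward_form, rule

# Example rescheduling step with a dummy state vector.
reward_form, rule = decide(torch.rand(STATE_DIM))
print(f"selected reward form {reward_form}, dispatching rule {rule}")
```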