The overall algorithmic flow of the network model is shown in Figure 5; it comprises selection of the load device, division of the data samples, data pre-processing, model construction, model training, and model tuning. To train the multilayer neural network effectively, the error back-propagation (BP) algorithm is used. In the forward pass, each pre-processed data sample is fed through the network: every layer computes the weighted sum of its neurons' inputs, applies a nonlinear activation function, and passes the result as the input to the next layer; repeating this process layer by layer yields the forward-propagation output [30,31]. The mean squared error (MSE) loss function is then used to measure the difference between the output of the last network layer and the target load power value. This error is propagated backward through the network to compute the error contribution of each preceding layer, and the adaptive moment estimation optimizer (Adam) adjusts the parameters of the neurons in every layer toward their optimal values, which completes one learning cycle [32].
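As a concrete illustration of this training loop, the sketch below performs a forward pass through a multilayer network, computes the MSE loss against the target load power, back-propagates the error, and applies an Adam parameter update. It is a minimal sketch written with PyTorch; the layer sizes, ReLU activation, learning rate, epoch count, and synthetic data are placeholder assumptions, since the paper does not specify the exact architecture or framework used.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: the paper does not state the number of input
# features, hidden widths, or batch size, so placeholder values are used.
NUM_FEATURES = 8      # pre-processed features per sample (assumed)
HIDDEN_UNITS = 64     # neurons per hidden layer (assumed)
BATCH_SIZE = 32
EPOCHS = 100
LEARNING_RATE = 1e-3

# Multilayer network: each layer computes a weighted sum of its inputs and
# applies a nonlinear activation whose output feeds the next layer.
model = nn.Sequential(
    nn.Linear(NUM_FEATURES, HIDDEN_UNITS),
    nn.ReLU(),
    nn.Linear(HIDDEN_UNITS, HIDDEN_UNITS),
    nn.ReLU(),
    nn.Linear(HIDDEN_UNITS, 1),   # predicted load power value
)

criterion = nn.MSELoss()                                              # MSE loss
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)    # Adam optimizer

# Synthetic stand-in for the pre-processed training samples and targets.
x_train = torch.randn(BATCH_SIZE, NUM_FEATURES)
y_train = torch.randn(BATCH_SIZE, 1)

for epoch in range(EPOCHS):
    # Forward propagation: layer-by-layer weighted sums and activations.
    y_pred = model(x_train)

    # Error between the last layer's output and the target load power value.
    loss = criterion(y_pred, y_train)

    # Backward propagation: distribute the error to the earlier layers,
    # then let Adam adjust the parameters of every layer.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Each pass through the loop corresponds to one complete learning cycle as described above: forward propagation, loss evaluation, error back-propagation, and parameter update.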