2021
DOI: 10.1016/j.jag.2021.102597
SUACDNet: Attentional change detection network based on siamese U-shaped structure

Cited by 67 publications
(35 citation statements)
References 38 publications
“…The overall algorithmic flow of the network model is shown in Figure 5, including the selection of the load device, the division of data samples, the data pre-processing method, and the construction, training, and tuning of the model. To train the multilayer neural network, the BP (error back-propagation) algorithm is used: the pre-processed data samples are passed through the network one by one, the weighted sum of each layer's neurons is computed, and the output of each layer's nonlinear activation function is fed as input to the next layer; repeating this process at every layer yields the forward-propagation result [30,31]. In this paper, we use the mean-square-error (MSE) loss function to measure the difference between the output of the last network layer and the target load power value; this error is propagated backward to compute the error of each earlier layer in turn, and the adaptive moment estimation (Adam) optimizer adjusts the parameters of the neurons in each layer toward their optima, completing one learning pass [32].…”
Section: Proposed Methodsmentioning
confidence: 99%
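The training loop quoted above (forward pass, MSE loss, backward pass, Adam update) can be sketched in miniature. This is an illustrative toy, not the cited paper's model: a single linear neuron fitted to synthetic data, with all names and hyperparameters chosen here for demonstration.

```python
import math

# Toy data for the target function y = 2x + 1 (illustrative, not the paper's load data)
data = [(x, 2.0 * x + 1.0) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]

params = {"w": 0.0, "b": 0.0}   # single neuron: y = w*x + b
m = {"w": 0.0, "b": 0.0}        # Adam first-moment estimates
v = {"w": 0.0, "b": 0.0}        # Adam second-moment estimates
lr, beta1, beta2, eps = 0.05, 0.9, 0.999, 1e-8

for t in range(1, 2001):
    # Forward pass and full-batch MSE gradients: d/dw (pred - target)^2
    grad = {"w": 0.0, "b": 0.0}
    for x, target in data:
        err = (params["w"] * x + params["b"]) - target
        grad["w"] += 2.0 * err * x / len(data)
        grad["b"] += 2.0 * err / len(data)
    # Adam update: bias-corrected moment estimates scale each step
    for p in ("w", "b"):
        m[p] = beta1 * m[p] + (1 - beta1) * grad[p]
        v[p] = beta2 * v[p] + (1 - beta2) * grad[p] ** 2
        m_hat = m[p] / (1 - beta1 ** t)
        v_hat = v[p] / (1 - beta2 ** t)
        params[p] -= lr * m_hat / (math.sqrt(v_hat) + eps)

print(params)  # the fit should approach w close to 2 and b close to 1
```

The same forward/backward/update cycle applies layer by layer in a deep network; only the gradient computation (backpropagation through each activation) becomes more involved.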
“…A CNN can process two-dimensional grid data such as images well [16][17][18][19][20], but much of the data encountered in daily life lies in non-Euclidean space, so more and more researchers are working on GCNs [21][22][23][24][25][26]. There are two main approaches to GCNs: spatial-based methods and spectral-based methods.…”
Section: Graph Convolutional Networkmentioning
confidence: 99%
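The spatial-based approach mentioned above boils down to neighbourhood aggregation: each node mixes its own feature with its neighbours' before a shared linear transform. A minimal sketch on a toy graph (the graph, features, and weight here are illustrative assumptions, not from the cited paper):

```python
# 3-node path graph 0 - 1 - 2, one scalar feature per node
neighbors = {0: [1], 1: [0, 2], 2: [1]}
features = {0: 1.0, 1: 2.0, 2: 3.0}

def gcn_layer(feats, nbrs, weight):
    """Spatial graph convolution: average each node's feature with its
    neighbours' features, apply a shared linear weight, then ReLU."""
    out = {}
    for node, hood in nbrs.items():
        agg = (feats[node] + sum(feats[n] for n in hood)) / (1 + len(hood))
        out[node] = max(0.0, weight * agg)
    return out

h1 = gcn_layer(features, neighbors, weight=0.5)
print(h1)  # {0: 0.75, 1: 1.0, 2: 1.25} -- each node now mixes 1-hop information
```

Stacking such layers widens each node's receptive field one hop at a time; spectral-based methods instead define the convolution via the eigenbasis of the graph Laplacian.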
“…Therefore, the attention module is used to multiply the feature map, weighting features across different scales. The attention mechanism [27,28] has been shown to help in numerous deep-learning tasks, such as image classification [29], image change detection [30], and image segmentation [31,32]. Attention models loosely mimic how the human brain selectively focuses on salient information.…”
Section: Deep Feature Extraction Modulementioning
confidence: 99%
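The weighting-by-multiplication idea in the statement above can be sketched as a simple channel-attention toy: pool each channel to a score, normalize the scores with a softmax, and rescale the channels by the resulting weights. Shapes and values here are illustrative, not the cited paper's module.

```python
import math

# 2 channels, 3 values each (toy feature map)
channels = [[1.0, 2.0, 3.0], [0.0, 0.0, 1.0]]

def attention_weights(chs):
    """Global-average-pool each channel to a score, then softmax the
    scores so the weights sum to 1 across channels."""
    scores = [sum(c) / len(c) for c in chs]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

w = attention_weights(channels)
# Rescale (multiply) each channel by its attention weight
weighted = [[v * wi for v in c] for c, wi in zip(channels, w)]
print(w)  # the channel with the stronger response receives the larger weight
```

Here the first channel pools to a higher score, so it dominates after the softmax; in a real module the pooled scores usually pass through a small learned MLP before the normalization.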