2023
DOI: 10.1016/j.asoc.2023.110331
Single-objective and multi-objective optimization for variance counterbalancing in stochastic learning

Cited by 3 publications (1 citation statement). References 12 publications.
“…(8), the backpropagation-through-time algorithm is applied to obtain the gradient of each parameter in the model from the loss function, and the parameter values are updated with a gradient descent strategy. The gradient descent strategy chosen in this paper is the Adam algorithm [36], and the parameter update formula is,…”
Section: Feature Fusion Process
confidence: 99%