2019
DOI: 10.1109/access.2019.2954859
Design of Momentum Fractional Stochastic Gradient Descent for Recommender Systems

Abstract: The demand for recommender systems in the E-commerce industry has increased tremendously. Efficient recommender systems are being proposed by different E-business companies with the intention of giving users accurate and relevant product recommendations from a huge amount of information. To improve the performance of recommender systems, various stochastic variants of gradient descent based algorithms have been reported. The scalability requirement of recommender systems needs algorithms with fast convergence …
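The full text is not reproduced here, but the title and abstract indicate a momentum-extended fractional stochastic gradient descent applied to matrix-factorization recommenders. As a rough illustration only, the sketch below combines a standard momentum step with a commonly used Caputo-style fractional factor |w|^(1-α)/Γ(2-α) applied to the ordinary stochastic gradient; the paper's exact update rule, variable names, and hyperparameters may differ.

```python
import numpy as np
from math import gamma

# Illustrative sketch (not the paper's exact rule): one rating update for
# matrix factorization using a momentum + fractional-gradient step.
# Assumes a Caputo-style factor |w|^(1-alpha) / Gamma(2-alpha) scaling the
# ordinary gradient; lr, beta, alpha, lam are illustrative hyperparameters.
def mfsgd_step(P, Q, u, i, r_ui, vP, vQ,
               lr=0.01, beta=0.9, alpha=0.9, lam=0.02, eps=1e-8):
    err = r_ui - P[u] @ Q[i]                      # prediction error on one rating
    gP = -err * Q[i] + lam * P[u]                 # ordinary regularized gradients
    gQ = -err * P[u] + lam * Q[i]
    frac = 1.0 / gamma(2.0 - alpha)               # fractional scaling constant
    fP = frac * gP * (np.abs(P[u]) + eps) ** (1.0 - alpha)
    fQ = frac * gQ * (np.abs(Q[i]) + eps) ** (1.0 - alpha)
    vP[u] = beta * vP[u] - lr * fP                # momentum (velocity) accumulation
    vQ[i] = beta * vQ[i] - lr * fQ
    P[u] += vP[u]                                 # apply updates to latent factors
    Q[i] += vQ[i]
    return err
```

Here P and Q are user and item latent-factor matrices and vP, vQ their velocity buffers; with α = 1 the fractional factor reduces to 1/Γ(1) = 1 and the step falls back to ordinary momentum SGD.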

Cited by 28 publications (13 citation statements)
References: 57 publications
“…In addition, we use different optimizers (including the stochastic gradient descent method, SGD [34][35][36][37][38], the momentum stochastic gradient descent method [39][40][41][42], and the adaptive moment estimation method, Adam [43][44]) to optimize the convolutional neural network constructed above. SGD makes the loss function converge slowly; the convergence rate of momentum SGD in the early stage is similar to that of SGD, but it can effectively converge the loss function in the later stage. The adaptive moment estimation method (Adam) makes the loss function converge quickly in the initial stage of neural network training, but it cannot converge further after the loss function converges to a certain extent in the later stage.…”
Section: Results
confidence: 99%
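For reference alongside the excerpt above, minimal NumPy versions of the plain SGD and momentum SGD update rules being contrasted; the citing work's actual optimizer settings are not shown on this page, so the learning rate and momentum coefficient below are illustrative.

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    """Plain SGD: a single fixed learning rate scales the raw gradient."""
    return w - lr * grad

def momentum_sgd_step(w, v, grad, lr=0.01, beta=0.9):
    """Momentum SGD: the velocity v accumulates past gradients, which
    typically helps late-stage convergence, as the excerpt notes."""
    v = beta * v - lr * grad
    return w + v, v
```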
“…To improve the training ability of a neural network, this paper proposes a fractional-order gradient descent with momentum method for training RBF neural networks. Compared with reference [12], the convergence of the proposed algorithm is proved in this paper. Compared with references [13,14], this paper uses both the momentum method and the gradient descent method.…”
Section: Introduction
confidence: 85%
“…The Adam algorithm [39] is different from traditional stochastic gradient descent [40], [41]. In stochastic gradient descent, a single learning rate is used to update all weights, and the learning rate does not change during the training process.…”
Section: Overall Optimization of the RBFNN
confidence: 99%
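The excerpt contrasts Adam's per-parameter adaptive step with SGD's single fixed learning rate. Below is a minimal sketch of the standard Adam update (Kingma and Ba), which the excerpt's reference [39] cites; the hyperparameter defaults are the usual illustrative ones, not values taken from the citing work.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One standard Adam update (t starts at 1): first/second moment
    estimates give each parameter its own effective step size,
    unlike plain SGD's single fixed learning rate."""
    m = b1 * m + (1 - b1) * grad            # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```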