2022
DOI: 10.3390/s22218224

Modeling and Applying Implicit Dormant Features for Recommendation via Clustering and Deep Factorization

Abstract: E-commerce systems suffer degraded performance as the customer database grows with the gradual increase in customers and products. Incorporating implicit hidden features into the recommender system (RS) plays an important role in enhancing its performance, given the sparseness of the original dataset. In particular, we can understand the relationship between products and customers by analyzing their hierarchically expressed hidden implicit features. Furthermore, the effect…
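The abstract's core idea, recovering hidden (latent) customer and product features from a sparse rating matrix, can be illustrated with plain matrix factorization. This is only a minimal sketch of the general technique, not the paper's actual clustering-plus-deep-factorization model; the matrix `R`, rank `k`, and hyperparameters below are illustrative assumptions.

```python
# Minimal latent-feature sketch (assumed setup, not the paper's method):
# factor a sparse user-item rating matrix R into low-rank factors P and Q
# with stochastic gradient descent, then predict the missing entries.
import numpy as np

def factorize(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Factor R (users x items, 0 = missing) into P (users x k) and Q (items x k)."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))   # latent customer features
    Q = rng.normal(scale=0.1, size=(n_items, k))   # latent product features
    observed = [(u, i) for u in range(n_users)
                for i in range(n_items) if R[u, i] > 0]
    for _ in range(steps):
        for u, i in observed:
            err = R[u, i] - P[u] @ Q[i]            # error on an observed rating
            P[u] += lr * (err * Q[i] - reg * P[u]) # regularized SGD updates
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Toy sparse ratings: rows = customers, columns = products, 0 = unrated.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
P, Q = factorize(R)
R_hat = P @ Q.T  # dense predictions, including the previously missing entries
```

The learned rows of `P` and `Q` are exactly the "implicit hidden features" the abstract refers to in the general sense: they are never observed directly, yet their inner products reconstruct the sparse ratings and fill in the gaps.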


Cited by 13 publications (6 citation statements); References 56 publications.
“…As described in earlier studies [ 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 ], recall is a false positive observation ratio in contrast. Our suggested model achieved a 98.3% accuracy rate and a 1.7% false detection rate.…”
Section: Results (mentioning)
confidence: 75%
“…In contrast, recall is a false-positive observation ratio, as detailed in previous research [ 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 ]. The precision of our proposed model was 99.3%, and the false detection rate was 0.7%.…”
Section: Results (mentioning)
confidence: 99%
“…Overfitting was a major concern during training, and it affects nearly all deep learning models. We tried to reduce overfitting risk using data augmentation methods to increase the training data and applying feature selection techniques by choosing the best features and removing the useless/unnecessary features [ 60 , 61 , 62 , 63 , 64 ].…”
Section: Implementation and Results (mentioning)
confidence: 99%