2019 International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI)
DOI: 10.1109/mlbdbi48998.2019.00040

Feature Fusion Recommendation Algorithm Based on Collaborative Filtering

Cited by 4 publications (4 citation statements)
References 1 publication
“…On the other hand, due to the different consumption behaviors and habits of consumers in the different regions of a city, some information on those behaviors might get lost or be ignored, leading to false recommendations. In that light, Wang et al. [85] proposed a feature fusion personalized recommendation algorithm based on collaborative filtering that combines dense feature data and sparse feature data. The algorithm focuses on learning the characteristics of users in a specific region and of some sparse users, while also learning the time period of ordering, thus mitigating the effect of sparse data on the model and personalizing the recommendation of items the consumer may buy.…”
Section: Results
mentioning
confidence: 99%
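The excerpt above describes fusing dense and sparse feature data in a collaborative-filtering setting. As a rough illustration only (not the method of Wang et al. [85]), the Python sketch below concatenates an assumed dense part (a region embedding plus an order-hour histogram) with an assumed sparse part (multi-hot category flags) into one user representation and scores items with a plain dot-product CF scorer; all feature names, dimensions and data are made-up assumptions.

```python
# Hedged sketch: fusing dense and sparse user features before CF scoring.
# Feature choices (region, order-hour histogram, category flags) are
# illustrative assumptions, not taken from the cited paper.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_regions, n_categories, embed_dim = 100, 50, 8, 20, 16

# Dense features: a region embedding plus a normalized 24-bin histogram of
# the hours at which each user tends to order.
region_embed = rng.normal(size=(n_regions, embed_dim))
user_region = rng.integers(0, n_regions, size=n_users)
order_hour_hist = rng.random((n_users, 24))
order_hour_hist /= order_hour_hist.sum(axis=1, keepdims=True)

# Sparse features: multi-hot flags of item categories the user interacted with.
sparse_flags = (rng.random((n_users, n_categories)) < 0.1).astype(float)

# Feature fusion: concatenate dense and sparse parts into one user vector.
user_repr = np.concatenate(
    [region_embed[user_region], order_hour_hist, sparse_flags], axis=1
)

# Project the fused features into the item-embedding space and score by dot
# product, as a plain matrix-factorization-style CF scorer would.
proj = rng.normal(size=(user_repr.shape[1], embed_dim)) * 0.1
item_embed = rng.normal(size=(n_items, embed_dim))
scores = user_repr @ proj @ item_embed.T        # shape (n_users, n_items)
top5 = np.argsort(-scores, axis=1)[:, :5]       # top-5 items per user
print(top5[0])
```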
“…However, these two hyper-parameters cannot be updated continually, so the model generalizes very poorly when predicting the error of missing ratings [10]. To address this weakness, double learning has been proposed, which uses two separate matrix factorization structures with different parameters to serve as the imputation model and the rating model, respectively [7,11]. Moreover, while one model is being trained, the other model is updated simultaneously.…”
Section: Joint Learning and Data Fusion
mentioning
confidence: 99%
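The "double learning" idea quoted above, namely two separate matrix factorization structures where one imputes errors for missing ratings and the other predicts ratings, with alternating updates, can be sketched roughly as follows. This is a toy illustration under stated assumptions, not the exact scheme of [7,11]; the loss weighting, learning rate, and update order are all assumptions.

```python
# Hedged sketch of alternating "double learning": a rating model and an
# imputation model, each a small matrix factorization, updated in turn.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 30, 20, 4
lr, epochs = 0.05, 50

# Simulated sparse rating matrix: ~20% of entries observed (mask == True).
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
mask = rng.random((n_users, n_items)) < 0.2
R_obs = R * mask

def factors():
    return (rng.normal(scale=0.1, size=(n_users, k)),
            rng.normal(scale=0.1, size=(n_items, k)))

Ur, Vr = factors()      # rating model
Ue, Ve = factors()      # imputation (error) model

for _ in range(epochs):
    pred = Ur @ Vr.T
    err_hat = Ue @ Ve.T                              # imputed prediction error
    # Rating-model step: true error on observed cells, imputed error elsewhere.
    grad = np.where(mask, pred - R_obs, err_hat)
    Ur -= lr * grad @ Vr / n_items
    Vr -= lr * grad.T @ Ur / n_users
    # Imputation-model step: fit the rating model's actual error on observed cells.
    e_target = np.where(mask, (Ur @ Vr.T) - R_obs, 0.0)
    grad_e = np.where(mask, err_hat - e_target, 0.0)
    Ue -= lr * grad_e @ Ve / n_items
    Ve -= lr * grad_e.T @ Ue / n_users

rmse = np.sqrt((((Ur @ Vr.T - R_obs) * mask) ** 2).sum() / mask.sum())
print("observed RMSE:", rmse)
```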
“…Also, the EIB estimator has difficulty estimating the prediction error accurately, while the IPS estimator often suffers from large variance, i.e., when the estimated propensity is very small it produces a very large value. Hence the Doubly Robust estimator was introduced [5,7], which integrates the estimated error and the propensity in a doubly robust manner to obtain unbiased performance estimates and mitigate the effect of propensity variance. That is, the estimate remains unbiased as long as either the estimated propensity or the estimated error is accurate.…”
Section: Introduction
mentioning
confidence: 99%
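The doubly robust combination described in this excerpt can be illustrated numerically: the DR estimate adds an inverse-propensity-weighted correction on the observed entries to the imputed error, so it stays close to the truth if either component is accurate. The toy data below are assumptions made purely for illustration; only the estimator formulas follow the standard EIB/IPS/DR definitions.

```python
# Hedged numeric sketch of the EIB, IPS and doubly robust (DR) estimators
# of the average prediction error over all user-item pairs.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

true_err = rng.normal(1.0, 0.5, size=n)                   # e_{u,i}: true error
imputed_err = true_err + rng.normal(0.0, 0.3, size=n)     # noisy imputation model
propensity = rng.uniform(0.05, 0.5, size=n)               # p_{u,i}: prob. observed
observed = rng.random(n) < propensity                     # o_{u,i}: indicator

# EIB: trust the imputation model everywhere.
eib = np.mean(imputed_err)
# IPS: reweight observed errors by inverse propensity.
ips = np.mean(observed * true_err / propensity)
# DR: imputed error plus propensity-weighted correction on observed entries.
dr = np.mean(imputed_err + observed * (true_err - imputed_err) / propensity)

print(f"true mean error: {true_err.mean():.3f}")
print(f"EIB: {eib:.3f}  IPS: {ips:.3f}  DR: {dr:.3f}")
```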
“…Density can be computed as d = #Ratings / (#Users × #Items). To ensure that a single rating value range was used, so that the results of the different datasets are comparable, the ratings in each dataset were normalised to the range [1.0, 5.0] using the standard min-max formula [42], which is used in many CF research works [43][44][45]. In order to quantify the rating prediction accuracy, the following two CF rating prediction error metrics were used [46,47]:…”
mentioning
confidence: 99%
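The density formula and the min-max normalisation to [1.0, 5.0] mentioned in this excerpt are simple enough to show directly. The toy rating vector and the assumed 1-10 source scale below are illustrative only.

```python
# Hedged sketch: dataset density and min-max normalisation into [1.0, 5.0].
import numpy as np

ratings = np.array([1, 2, 5, 7, 9, 10], dtype=float)   # assumed 1-10 scale
n_users, n_items = 3, 4

# Density: number of ratings over the number of possible user-item pairs.
density = len(ratings) / (n_users * n_items)

# Standard min-max formula rescaled to the target range [1.0, 5.0].
new_min, new_max = 1.0, 5.0
normalised = new_min + (ratings - ratings.min()) / (ratings.max() - ratings.min()) * (new_max - new_min)

print(f"density = {density:.2f}")
print(normalised)   # all values now lie in [1.0, 5.0]
```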