2022
DOI: 10.1016/j.dcan.2022.04.034
FedCDR: Privacy-preserving federated cross-domain recommendation


Cited by 26 publications (9 citation statements); references 15 publications.
“…For example, for the CV problem, He et al. [49] proposed Momentum Contrast (MoCo) for unsupervised learning of visual representations, a model that uses a contrastive learning approach to self-supervise the training of the image encoder so that it encodes images better for downstream tasks. In contrastive learning for language modeling, Yan et al. [50] proposed the ConSERT framework, which employs contrastive learning to fine-tune BERT, addressing the problem that data augmentation methods can alter semantic information in natural language models. In the realm of RS, Xie et al. [18] introduced Contrastive Learning for Sequential Recommendation (CL4SRec), a model that concentrates on deriving self-supervised signals from raw user behavior sequences using three different data augmentation methods to improve personalized recommendation performance.…”
Section: Contrastive Learning
confidence: 99%
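The methods named in this excerpt (MoCo, ConSERT, CL4SRec) all train encoders with a contrastive objective that pulls two views of the same item together and pushes other in-batch items apart. The sketch below is a minimal InfoNCE-style loss in NumPy; it is an illustration of the general technique, not code from any of the cited papers, and the function name and toy data are assumptions.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: row i of `positives` is the positive view for row i
    of `anchors`; every other row serves as an in-batch negative."""
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the correct (anchor, positive) pairs lie on the diagonal
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
# two lightly perturbed views of the same batch score a low loss;
# unrelated vectors score roughly log(N)
loss_aligned = info_nce(x, x + 0.01 * rng.normal(size=(8, 16)))
loss_random = info_nce(x, rng.normal(size=(8, 16)))
```

In CL4SRec the two "views" come from augmenting one behavior sequence (e.g. cropping or masking it) rather than from image transforms, but the loss has the same shape.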
“…[34] Furthermore, traditional anonymization approaches have been found to be vulnerable to GNN-based inference attacks [35]. To deal with the issue, the study [36] uses federated learning with local differential privacy to protect users' private information. In particular, a privacy-preserving embedding transformation mechanism is used to preserve user data.…”
Section: Privacy-Preserving Graph Publishing (PPGP)
confidence: 99%
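Local differential privacy, as described in this excerpt, perturbs each user's data on the client before anything leaves the device, so the server never sees the raw embedding. The cited study's exact mechanism is not reproduced in the excerpt; the sketch below uses the standard Laplace mechanism as an illustrative stand-in, and the function name, clipping bound, and budget split are assumptions.

```python
import numpy as np

def laplace_ldp(embedding, epsilon, clip=1.0):
    """Perturb a user embedding on the client before transmission.
    Clipping to [-clip, clip] bounds each coordinate's L1 sensitivity
    at 2*clip; adding Laplace noise with scale 2*clip/epsilon then
    gives epsilon-LDP per coordinate (the budget for the full vector
    composes across coordinates)."""
    clipped = np.clip(embedding, -clip, clip)
    noise = np.random.laplace(loc=0.0, scale=2 * clip / epsilon,
                              size=clipped.shape)
    return clipped + noise

rng = np.random.default_rng(1)
user_vec = rng.normal(size=64)      # a user's local embedding
noisy = laplace_ldp(user_vec, epsilon=1.0)  # what the server receives
```

Smaller `epsilon` means stronger privacy but noisier embeddings, which is exactly the privacy/recommendation-quality trade-off the citing papers discuss.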
“…in which t denotes the check-in time, z (0 ≤ z ≤ 1) denotes the forgetting coefficient, and t min and t max represent the earliest and latest check-in times, respectively. In formula (14), parameter z adjusts the rangeability of user interest. According to the average rangeability of users' interests, we set the value of z to 0.5.…”
Section: Dt Rust(u
confidence: 99%
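Formula (14) itself is not reproduced in the excerpt, only its variables: check-in time t, the earliest and latest times t_min and t_max, and a forgetting coefficient z in [0, 1] that controls how strongly older check-ins are discounted. The concrete linear form below is an assumed reading for illustration, not the cited paper's equation.

```python
def forgetting_weight(t, t_min, t_max, z=0.5):
    """Recency weight for a check-in at time t.

    t_min and t_max are the earliest and latest check-in times;
    z (0 <= z <= 1) is the forgetting coefficient: z = 0 ignores
    recency entirely, z = 1 discounts the oldest check-in to zero.
    The linear form is an assumed stand-in for formula (14).
    """
    if t_max == t_min:
        return 1.0  # single check-in time: nothing to discount
    normalized = (t - t_min) / (t_max - t_min)  # 0 = oldest, 1 = newest
    return (1.0 - z) + z * normalized

# with z = 0.5 (the value the citing paper reports choosing),
# weights span [0.5, 1.0]
w_old = forgetting_weight(0, t_min=0, t_max=10)   # -> 0.5
w_new = forgetting_weight(10, t_min=0, t_max=10)  # -> 1.0
```

Under this reading, z directly sets the spread between the oldest and newest check-in weights, matching the excerpt's statement that z adjusts the rangeability of user interest.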
“…These entities can process, analyze, and mine the data to extract useful information, but they can also sell or share the collected data with third parties, who may use it maliciously. Some scholars have recognized the importance of privacy preservation in POI recommendation and have designed a number of methods to protect users' information [13][14][15]. These methods adopt strategies such as privacy-parameter optimization, tuning the influence of disturbances, and controlling modeling errors [16,17] to resolve the tension between privacy-protection strength and recommendation quality.…”
Section: Introduction
confidence: 99%