2019 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata47090.2019.9005677
Deep Tensor Factorization for Multi-Criteria Recommender Systems

Cited by 20 publications (9 citation statements) | References 25 publications
“…Cross-domain meta-learner θ: MSN takes modulation vectors as input to learn a cross-domain multi-initialization meta-learner for new multi-modal tasks. As lower layers in deep networks tend to produce general features while higher layers tend to produce specific features [11,12], the lower layers of MSN, named the common network θ_c, learn domain-invariant knowledge, while the higher layers of MSN, named the private networks θ_p^d for d ∈ {s, t} in Fig. 2, learn domain-specific knowledge.…”
Section: Meta Separation Network (MSN)
confidence: 99%
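The shared-lower / private-upper split described in the statement above can be sketched as follows. This is a minimal illustration only: the layer sizes, the two domains {s, t}, and the plain-numpy forward pass are assumptions for demonstration, not the architecture of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Common network theta_c: lower layers shared across domains,
# intended to capture domain-invariant features.
W_c = rng.normal(size=(8, 16))

# Private networks theta_p^d: one upper head per domain d in {s, t},
# intended to capture domain-specific features.
W_p = {"s": rng.normal(size=(16, 4)),
       "t": rng.normal(size=(16, 4))}

def msn_forward(x, domain):
    """Route input through the shared trunk, then the domain-specific head."""
    h = relu(x @ W_c)        # shared, domain-invariant representation
    return h @ W_p[domain]   # domain-specific output

x = rng.normal(size=(2, 8))
out_s = msn_forward(x, "s")
out_t = msn_forward(x, "t")
# The same shared features feed both heads; only the head differs per domain.
```

The point of the split is that gradients from every domain update `W_c`, while each `W_p[d]` is updated only by its own domain's tasks.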
“…Both users' content information and ontological semantic filtering are utilized to solve the sparsity problem in [35]. Content features extracted by stacked denoising autoencoders are integrated into tensor factorization in [36]. Even though the work presented in [36] utilizes denoising autoencoders to extract features from content information, it still relies on a linear assumption during the prediction process.…”
Section: Related Work
confidence: 99%
“…Content features extracted by stacked denoising autoencoders are integrated into tensor factorization in [36]. Even though the work presented in [36] utilizes denoising autoencoders to extract features from content information, it still relies on a linear assumption during the prediction process. [37] proposes to use preference-based similarity instead of computing correlations over sparse rating profiles to handle the sparsity issue.…”
Section: Related Work
confidence: 99%
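The idea of integrating content features into tensor factorization for multi-criteria ratings can be sketched with a minimal CP-style model. Everything here is an illustrative assumption, not the method of [36]: the dimensions, the single SGD step, and the use of random vectors as a stand-in for autoencoder-extracted content features that seed the item factors.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, n_criteria, k = 5, 6, 3, 4

# Latent factors for the user x item x criterion rating tensor.
U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))
C = rng.normal(scale=0.1, size=(n_criteria, k))

# Stand-in for content features a stacked denoising autoencoder
# would extract; here they simply seed the item factors.
item_content = rng.normal(scale=0.1, size=(n_items, k))
V += item_content

def predict(u, i, c):
    """CP-style prediction: sum over the shared latent dimension."""
    return float(np.sum(U[u] * V[i] * C[c]))

# One SGD step on a single observed rating r for (user, item, criterion).
u, i, c, r, lr = 0, 2, 1, 4.0, 0.05
err_before = r - predict(u, i, c)
gU = err_before * (V[i] * C[c])   # gradient of squared error w.r.t. U[u]
gV = err_before * (U[u] * C[c])
gC = err_before * (U[u] * V[i])
U[u] += lr * gU
V[i] += lr * gV
C[c] += lr * gC
err_after = r - predict(u, i, c)
```

The linearity the citation statement criticizes is visible in `predict`: the rating is a trilinear function of the factors, with no nonlinearity applied to their interaction.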
“…The DTF can extract hierarchical and meaningful features of multi-channel images such as hyperspectral images, and is thus popular in image classification and pattern classification [227,228,229]. DTF can also be used in recommender systems [230], scene decomposition [231], and fault diagnosis [232].…”
Section: Dealing With High Dimensional Data
confidence: 99%