2023
DOI: 10.3390/e25020361
Design and Application of Deep Hash Embedding Algorithm with Fusion Entity Attribute Information

Abstract: Hashing is one of the most widely used methods for improving computational and storage efficiency. With the development of deep learning, deep hash methods show more advantages than traditional methods. This paper proposes a method (FPHD) to convert entities with attribute information into embedded vectors. The design uses a hash method to quickly extract entity features, and uses a deep neural network to learn the implicit associations between entity features. This design solves two main problems in large-scal…
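The abstract's first step, hashing entity attributes into a fixed-size embedding, can be illustrated with a minimal sketch. This is not the paper's FPHD implementation; the function names, bucket counts, and the sum-pooling over several hashed tables are illustrative assumptions.

```python
import hashlib
import numpy as np

def hash_bucket(token, num_buckets, seed):
    # Deterministically hash a string attribute into one of num_buckets slots.
    digest = hashlib.md5(f"{seed}:{token}".encode()).hexdigest()
    return int(digest, 16) % num_buckets

def embed_entity(attributes, tables):
    """Map an entity's attribute strings to a fixed-size vector by hashing
    each attribute into several small embedding tables and averaging the
    selected rows (a generic hash-embedding sketch, not the paper's FPHD)."""
    dim = tables[0].shape[1]
    vec = np.zeros(dim)
    for seed, table in enumerate(tables):
        for attr in attributes:
            vec += table[hash_bucket(attr, table.shape[0], seed)]
    return vec / (len(tables) * len(attributes))
```

In a full model, the embedding tables would be trainable parameters and the pooled vector would feed a deep network that learns associations between entity features.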

Cited by 3 publications (3 citation statements)
References 26 publications
“…Condition C2 requires the number of marginal sufficient variables to be of the order o(N^{1/2}), so the model size may not be very sparse. Condition C3 provides the bounds of the quantile grid points and avoids technicalities arising in the tail area, as in Tang et al [20] and Yuan et al. [28] Condition C4 assumes that the minimum true signal strength can vanish, converging to zero at the order of N^{-ϱ_1} as the sample size N goes to infinity. Such a condition is typical in the feature screening literature, such as Fan and Lv, [1] He et al, [9] and Tang et al. [20] We next present an important property of the proposed test statistic in Theorem 1.…”
Section: Asymptotic Property of Marginal QC-SVS-FDR
confidence: 99%
“…In 2020, Wang et al [19] proposed the CKAN model by exploring a message-passing mechanism on knowledge graphs that exploits higher-order connectivity in an end-to-end manner. Meanwhile, in 2023, our team [20] proposed a deep hash-embedding recommendation method based on a knowledge graph, and the model achieved very good results.…”
Section: Recommendation System Based on Knowledge Graph
confidence: 99%
“…where ŷ_FM(x) is the final prediction function of the FM algorithm. For each given feature vector x ∈ R^n, ŷ_FM(x) gives a prediction score; w_0 ∈ R is the constant term, w_i is the weight of the i-th first-order term (i.e., the first-order coefficient), and ŵ_ij is the second-order coefficient, defined as shown in Equation (20). FM learns a vector of size k for each feature, and the weight of the combination of two features x_i and x_j can be expressed by the inner product of the corresponding vectors v_i and v_j.…”
Section: Factorization Machine (FM) and Collaborative Filtering
confidence: 99%
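The FM prediction described in the statement above can be sketched as follows. This is a generic factorization-machine scorer, not the cited paper's exact model; the function name and shapes are assumptions. The pairwise term uses the standard O(nk) identity rather than an explicit double sum.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Factorization Machine score ŷ_FM(x) = w0 + Σ_i w_i x_i
    + Σ_{i<j} <v_i, v_j> x_i x_j.
    x: (n,) feature vector; w0: scalar bias; w: (n,) first-order weights;
    V: (n, k) latent factors, one k-dimensional vector v_i per feature."""
    linear = w0 + w @ x
    # Pairwise interactions via the O(nk) identity:
    #   Σ_{i<j} <v_i, v_j> x_i x_j
    #     = 0.5 * Σ_f [ (Σ_i V[i,f] x_i)^2 - Σ_i V[i,f]^2 x_i^2 ]
    s = V.T @ x                                   # shape (k,)
    pairwise = 0.5 * (s @ s - ((V**2).T @ (x**2)).sum())
    return linear + pairwise
```

The latent factors V let the model estimate interaction weights for feature pairs that never co-occur in training data, which is the property that makes FM useful for sparse recommendation features.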