The World Wide Web Conference 2019
DOI: 10.1145/3308558.3313468

Neural IR Meets Graph Embedding: A Ranking Model for Product Search

Abstract: Recently, neural models for information retrieval have become increasingly popular. They provide effective approaches for product search due to their competitive advantages in semantic matching. However, it is challenging to use graph-based features, though proven very useful in the IR literature, in these neural approaches. In this paper, we leverage recent advances in graph embedding techniques to enable neural retrieval models to exploit graph-structured data for automatic feature extraction. The proposed a…

Cited by 46 publications (28 citation statements)
References 38 publications (60 reference statements)
“…To study the effects of IPS off-policy correction, we imputed a global impression bias by counting item frequency in the training set. Figure 4(a) shows that the NDCG with our metadata model is comparable with the SOTA from Zhang et al. [35] (green versus dot-dash blue line at the top). Using cold-start models hurts performance because there is no significant distribution shift between the train and test splits, i.e., total-variation loss is lower between train/test (15%) than between two random folds splitting the test set (18%).…”
Section: Cold Start With Item Metadata and IPS (supporting)
confidence: 70%
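For readers reproducing the distribution-shift check quoted above, here is a minimal sketch: it builds the empirical item-frequency distributions of two interaction splits and computes their total-variation distance (half the L1 distance). The function names, and the inverse-frequency IPS weighting shown alongside, are illustrative assumptions rather than code from the cited papers.

```python
from collections import Counter

def item_distribution(item_ids):
    # Empirical item-frequency distribution over a list of interacted item IDs.
    counts = Counter(item_ids)
    total = sum(counts.values())
    return {item: c / total for item, c in counts.items()}

def total_variation(p, q):
    # Total-variation distance: half the L1 distance between two distributions.
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(i, 0.0) - q.get(i, 0.0)) for i in support)

def ips_weights(item_ids):
    # Inverse-propensity weights imputed from global impression counts
    # (assumption: propensity taken proportional to training-set frequency).
    counts = Counter(item_ids)
    return {item: 1.0 / c for item, c in counts.items()}

# Hypothetical usage, with train_items / test_items as lists of item IDs:
# tv = total_variation(item_distribution(train_items), item_distribution(test_items))
```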
“…To measure metadata performance, we use bag-of-words (BoW) features and a product search dataset. Following Zhang et al. [35], we consider 37.7k train queries and 16k test queries as users, and all 184k unique products as candidates. Query/product features come from their bag-of-words (BoW) representation, where we used only the top 3,000 most frequent words and performed a square-root transform on the BoW features.…”
Section: Cold Start With Item Metadata and IPS (mentioning)
confidence: 99%
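The feature pipeline described in this statement (a top-3,000-word bag-of-words representation followed by a square-root transform) can be sketched as below with scikit-learn; the query and product texts are placeholders, and this is only an illustration of the transform, not the authors' code.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder corpora standing in for query and product texts.
query_texts = ["wireless noise cancelling headphones", "running shoes size 10"]
product_texts = ["bluetooth over-ear headphones with noise cancellation",
                 "lightweight cushioned running shoes"]

# Keep only the 3,000 most frequent words across the combined corpus.
vectorizer = CountVectorizer(max_features=3000)
vectorizer.fit(query_texts + product_texts)

# Raw term counts, then a square-root transform to dampen very frequent terms.
query_feats = np.sqrt(vectorizer.transform(query_texts).toarray())
product_feats = np.sqrt(vectorizer.transform(product_texts).toarray())
```

The square root keeps the sparsity pattern of the counts while compressing their dynamic range, which is why it is a common alternative to TF-IDF weighting in this kind of setup.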
“…Graph embedding techniques like DeepWalk [23] are effective for association analysis in graphical structures [3], in which low-dimensional representations of the nodes with neighboring and co-occurrence relations are learned. Zhang et al. [29] propose a graph embedding-based neural ranking framework to overcome the query-entity sparsity problem by integrating features in click-graph data. On heterogeneous information networks, recent studies for proximity search [18] learn graph embedding models to rank associative nodes by given semantic relations.…”
Section: Related Work (mentioning)
confidence: 99%
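As a hedged illustration of the DeepWalk-style embedding mentioned in this statement, the sketch below generates truncated random walks over a toy click graph and feeds them to a skip-gram model (gensim's Word2Vec). The graph, hyperparameters, and helper names are assumptions for illustration, not the cited implementation.

```python
import random
from gensim.models import Word2Vec

def random_walks(adj, num_walks=10, walk_length=40):
    # Truncated random walks over an adjacency dict {node: [neighbors]}.
    walks = []
    nodes = list(adj)
    for _ in range(num_walks):
        random.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length and adj[walk[-1]]:
                walk.append(random.choice(adj[walk[-1]]))
            walks.append(walk)
    return walks

# Toy click graph: queries (q*) linked to products (p*) clicked for them.
adj = {"q1": ["p1", "p2"], "q2": ["p2"], "p1": ["q1"], "p2": ["q1", "q2"]}

# Skip-gram (sg=1) over the walks yields low-dimensional node representations.
model = Word2Vec(random_walks(adj), vector_size=64, window=5, min_count=1, sg=1)
q1_vector = model.wv["q1"]
```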
“…As discussed in [46], the mutual learning framework can help to find more robust local minima via entropy regularization. Different from DML, which jointly trains two neural architectures (e.g., ResNet and MobileNet), our approach combines two types of models, i.e., embedding/neural models and path/graph models, with rather different inductive biases [45], to allow them to benefit more from each other.…”
Section: Model Training (mentioning)
confidence: 99%
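A minimal sketch of the mutual-learning idea this statement builds on: two models are trained jointly, each fitting the labels while a KL term pulls its predictions toward the other's. The model handles, optimizers, and loss weighting below are placeholders under that assumption, not the cited system's training code.

```python
import torch
import torch.nn.functional as F

def mutual_learning_step(model_a, model_b, opt_a, opt_b, x, y, alpha=0.5):
    # One joint update: each model fits the labels and also mimics the
    # other's (detached) prediction distribution via a KL term.
    logits_a, logits_b = model_a(x), model_b(x)
    loss_a = F.cross_entropy(logits_a, y) + alpha * F.kl_div(
        F.log_softmax(logits_a, dim=-1),
        F.softmax(logits_b.detach(), dim=-1),
        reduction="batchmean")
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # Symmetric update for the second model, using refreshed peer predictions.
    logits_a, logits_b = model_a(x), model_b(x)
    loss_b = F.cross_entropy(logits_b, y) + alpha * F.kl_div(
        F.log_softmax(logits_b, dim=-1),
        F.softmax(logits_a.detach(), dim=-1),
        reduction="batchmean")
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()
```

Detaching the peer's predictions keeps each KL term from back-propagating into the other model, so the two updates stay symmetric and independent within a step.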