Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.3115/v1/d14-1114
Tailor knowledge graph for query understanding: linking intent topics by propagation

Abstract: Knowledge graphs have recently been used to enrich query representations in an entity-aware way, drawing on the rich facts organized around entities. However, few methods pay attention to non-entity words and clicked websites in queries, which also help convey user intent. In this paper, we tackle the problem of intent understanding by innovatively representing entity words, refiners, and clicked URLs as intent topics in a unified knowledge-graph-based framework, in a way that exploits and expands knowledge …
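The propagation step named in the title and abstract can be sketched, loosely, as label propagation over a graph whose nodes are intent topics (entity words, refiners, clicked URLs). Everything below — the node layout, the damping factor `alpha`, and the function name — is an illustrative assumption, not the paper's actual algorithm:

```python
# Hypothetical sketch of intent-topic propagation over a small graph.
# Node roles, edge weights, and alpha are illustrative, not from the paper.
import numpy as np

def propagate(adj, seed_scores, alpha=0.85, iters=50):
    """Spread intent-topic scores over a row-normalized adjacency matrix."""
    # Row-normalize so each node distributes its score to its neighbors.
    deg = adj.sum(axis=1, keepdims=True)
    P = np.divide(adj, deg, out=np.zeros_like(adj), where=deg > 0)
    scores = seed_scores.copy()
    for _ in range(iters):
        # Blend propagated mass with the original seed evidence.
        scores = alpha * (P.T @ scores) + (1 - alpha) * seed_scores
    return scores

# Tiny illustrative graph: node 0 = entity word, 1 = refiner, 2 = clicked URL.
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
seeds = np.array([1.0, 0.0, 0.0])  # only the entity node carries seed evidence
print(propagate(adj, seeds))       # refiner and URL nodes pick up intent mass
```

At the fixed point the seeded entity node keeps the largest score while the refiner and URL nodes, being symmetric neighbors, receive equal shares — the qualitative behavior one expects from linking intent topics by propagation.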

Cited by 5 publications (15 citation statements)
References 26 publications
“…While some knowledge graph embedding methods utilize images of entities (Xie et al., 2017; Liu et al., 2019), some recently proposed multimodal methods consider both images and text descriptions of entities (Pezeshkpour et al., 2018; Wang et al., 2019). For example, MoSE (Zhao et al., 2022) and IMF (Li et al., 2023) learn modality-specific representations and make predictions using the representations from different modalities. Also, OTKGE (Cao et al., 2022) proposes an optimal transport to align multi-modal embeddings, while MKGformer (Chen et al., 2022) conducts multi-level fusion using a hybrid transformer.…”
Section: Multimodal Knowledge Graph Completion
confidence: 99%
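The pattern described in the excerpt above — learning modality-specific representations and combining their predictions — can be sketched as a simple late-fusion step. The score values, weights, and function name below are illustrative assumptions, not the actual method of MoSE, IMF, or any other cited model:

```python
# Hedged sketch of late fusion across modality-specific scorers for
# knowledge graph completion; all numbers here are made up for illustration.
import numpy as np

def fuse_scores(modality_scores, weights=None):
    """Combine per-modality candidate-entity scores into one ranking."""
    stacked = np.stack(modality_scores)  # shape: (n_modalities, n_candidates)
    if weights is None:
        # Default to a uniform weighting over modalities.
        weights = np.full(len(modality_scores), 1.0 / len(modality_scores))
    return weights @ stacked             # weighted late fusion

structural = np.array([0.9, 0.2, 0.4])  # scores from a graph-structure model
visual     = np.array([0.6, 0.8, 0.1])  # scores from an image-based model
textual    = np.array([0.7, 0.3, 0.5])  # scores from a description-based model
fused = fuse_scores([structural, visual, textual])
print(fused.argmax())  # index of the top-ranked candidate after fusion
```

A modality that is noisy for a given query can be down-weighted by passing explicit `weights`, which is one practical reason to keep the per-modality scorers separate until this final step.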
“…We use four datasets shown in Table 2; VTKG-I and VTKG-C are real-world VTKG datasets introduced in Section 3.2. While WN18 (Bordes et al., 2013) and FB15K237 (Toutanova and Chen, 2015) are benchmark datasets used in other multimodal knowledge graph completion research (Zhao et al., 2022; Chen et al., 2022), WN18 has a test leakage issue, and WN18RR (Dettmers et al., 2018) has been proposed to resolve the issue. In our experiments, we use WN18RR++, which is the fixed version of WN18RR as described in Section 3.2.…”
Section: Datasets and Experimental Setup
confidence: 99%
“…search log) or external sources (e.g. Knowledge Graph) has been leveraged to address this challenge [38]. Second, the majority of existing solutions [2, 5] often train models from scratch in a supervised way, which requires a large amount of task-specific labeled data.…”
Section: Introduction
confidence: 99%