2022
DOI: 10.1007/s41019-022-00179-3

Disentangled Graph Recurrent Network for Document Ranking

Abstract: BERT-based ranking models are emerging owing to their superior natural language understanding ability. All word relations and representations in the concatenation of query and document are modeled in the self-attention matrix as latent knowledge. However, some of this latent knowledge has no effect, or a negative effect, on the relevance prediction between query and document. We model the observable and unobservable confounding factors in a causal graph and perform a do-query to predict the relevance label given an intervention over th…
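The abstract is cut off mid-sentence, but a do-query over a causal graph with an observable confounder conventionally resolves via the backdoor adjustment. The sketch below states that identity under the assumption that an observable confounder Z blocks all backdoor paths into the query-document input X; the paper's actual graph and variable names are not shown here.

```latex
% Backdoor adjustment: intervening on the input X cuts the edge
% from the confounder Z into X, so the interventional relevance
% distribution averages the conditional over Z's prior.
P(R \mid \mathrm{do}(X)) \;=\; \sum_{z} P(R \mid X,\, Z = z)\, P(Z = z)
```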

Cited by 9 publications (2 citation statements)
References 19 publications
“…SGC [9] adopts straightforward low-pass filters to simplify GCN, smoothing neighbor features using normalized adjacency matrices. GNN models, especially GCN and its variants, have been successfully applied to recommendation systems [41], [42], [43], social network mining [44], [45], natural language processing [46], [47], [48], and biochemistry [49]. The line of research most related to our work is federated learning (FL) over graphs.…”
Section: Related Work
confidence: 99%
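The low-pass filtering step this statement attributes to SGC is simple enough to show concretely. Below is a minimal numpy sketch: SGC drops GCN's per-layer nonlinearities and instead precomputes k rounds of feature smoothing with the symmetrically normalized adjacency matrix, after which a plain linear classifier is trained. The function name and the number of propagation steps k are illustrative.

```python
import numpy as np

def sgc_features(adj: np.ndarray, features: np.ndarray, k: int = 2) -> np.ndarray:
    """Precompute SGC's smoothed inputs: (D^-1/2 (A + I) D^-1/2)^k X."""
    a_tilde = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))  # degree^(-1/2)
    a_hat = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    for _ in range(k):                               # k rounds of neighbor smoothing,
        features = a_hat @ features                  # i.e., a low-pass graph filter
    return features                                  # then train a linear classifier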
“…BERT [4], ERNIE [42], and RoBERTa [26] have dominated many natural language processing tasks, and have also achieved remarkable success on passage re-ranking. For example, PLM-based re-rankers [5,6,23,28] have achieved state-of-the-art performance; they take the concatenation of the query-passage pair as input and apply multi-layer full attention to model its semantic relevance. Their superiority can be attributed to the expressive transformer structure and the pretrain-then-finetune paradigm, which allow the model to learn useful implicit knowledge (i.e., semantic relevance in the latent space) from massive textual corpora [8].…”
Section: Introduction
confidence: 99%
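The concatenation-plus-full-attention pattern this statement describes is the standard cross-encoder re-ranker. A minimal sketch with Hugging Face transformers follows; the checkpoint name is a placeholder, and in practice a BERT-style model fine-tuned for query-passage relevance classification would be loaded.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint: a real re-ranker would load weights
# fine-tuned for relevance classification, not the raw base model.
NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSequenceClassification.from_pretrained(NAME, num_labels=2)
model.eval()

def rerank_score(query: str, passage: str) -> float:
    # The query and passage are concatenated into one sequence
    # ("[CLS] query [SEP] passage [SEP]"), so multi-layer full
    # self-attention models their semantic relevance jointly.
    inputs = tokenizer(query, passage, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(dim=-1)[0, 1].item()  # probability of "relevant"
```

Candidates retrieved by a first-stage ranker are then sorted by rerank_score(query, passage).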