2022
DOI: 10.1007/978-3-031-17120-8_32
LoCSGN: Logic-Contrast Semantic Graph Network for Machine Reading Comprehension

Cited by 1 publication (2 citation statements)
References 21 publications
“…NatLog [30], Stanford RTE [31], LReasoner [17], L-Datt [32], MERIt [4], LogiGAN [33], DAGN [18], AdaLoGN [16], Logiformer [34], LoCSGN [35]. The first category comprises approaches from the pre-training perspective: based on heuristic rules that capture logical relations in large corpora, they design corresponding training tasks for these relations to further pre-train existing pre-trained language models; examples include MERIt [4] and LogiGAN [33]. MERIt [4] proposes using rules over a large amount of unlabeled textual data, modeled after the form of the logical-inference MRC task, to construct data for self-supervised contrastive pre-training.…”
Section: Rule-based Pre-training-based Data Enhancement, GNN-based (mentioning)
confidence: 99%
“…On the ReClor and LogiQA datasets, using the same experimental setup, the DaGATN model was compared with baseline models and the other logical-inference machine reading comprehension models on the leaderboard, i.e., the graph neural network-based DAGN [18], AdaLoGN [16], Logiformer [34], and LoCSGN [35], the pre-training-based MERIt [4], and the data enhancement-based LReasoner [17]. For a fair comparison with our approach, we selected the experimental results of MERIt using deberta-v2-xlarge and the best results of LReasoner under non-ensemble conditions; the experimental results are shown in Table 3.…”
Section: Comparative Experiments (mentioning)
confidence: 99%