Findings of the Association for Computational Linguistics: ACL 2022
DOI: 10.18653/v1/2022.findings-acl.127

Logic-Driven Context Extension and Data Augmentation for Logical Reasoning of Text

Abstract: Logical reasoning of text requires identifying critical logical structures in the text and performing inference over them. Existing methods for logical reasoning mainly focus on contextual semantics of text while struggling to explicitly model the logical inference process. In this paper, we not only put forward a logic-driven context extension framework but also propose a logic-driven data augmentation algorithm. The former follows a three-step reasoning paradigm, and each step is respectively to extract logi…

Cited by 22 publications (9 citation statements) | References 37 publications
“…works (Gao et al, 2021;Rajpurkar et al, 2016;Welbl et al, 2018a;Yang et al, 2018a;Huang et al, 2019a;Wang et al, 2021) that examine the ability of logical reasoning. LogiQA (Liu et al, 2020b) and ReClor (Yu et al, 2020) are sourced from examination in realistic scenario and examine a range of logical reasoning skills.…”
Section: Logical Reasoning
confidence: 99%
“…Differently, Betz et al (2021) and Clark et al (2020) used synthetically generated datasets to prove that the Transformer (Vaswani et al, 2017) or pre-trained GPT-2 is able to perform complex reasoning, motivating following researchers to introduce symbolic rules into neural models. For example, Wang et al (2022) developed a context extension and data augmentation framework, which is based on the extracted logical expressions. Superior performance over its contenders can be observed on the ReClor dataset.…”
Section: Logical Reasoning
confidence: 99%
“…We evaluated our method on two challenging logical reasoning benchmarks, i.e., LogiQA and ReClor, with several strong baselines, including the pre-trained language models, DAGN, Focal Reasoner (Ouyang et al, 2021) and LReasoner (Wang et al, 2022). For more details, please refer to Appendix B.…”
Section: Dataset and Baseline
confidence: 99%