2018
DOI: 10.48550/arxiv.1810.01375
Preprint

A Knowledge Hunting Framework for Common Sense Reasoning

Cited by 3 publications (3 citation statements)
References 0 publications
“…A plethora of works in the literature focused on the development of systems for tackling the WSC. In this regard, Emami et al [15] developed a rule-based system that, by focusing on knowledge-hunting on the Web, was able to achieve better than 57% accuracy on the original WSC problem (WSC273). According to Kocijan et al [32], this was the first approach to achieve better than chance accuracy.…”
Section: Related Work
confidence: 99%
“…Note that the results for (Ruan et al 2019) are fine-tuned on the whole WSCR dataset, including the training and test sets. Results for the LM ensemble (Trinh and Le 2018) and Knowledge Hunter (Emami et al 2018) are taken from (Trichelair et al 2018). Results for "BERT large + MTP" are taken from (Kocijan et al 2019) as the baseline of applying BERT to the WSC task.…”
Section: Winograd Schema Challenge
confidence: 99%
“…Therefore, most of the pronoun resolution models that address hard pronoun resolution rely on little (Liu et al, 2019) or no training data, via unsupervised pre-training (Trinh and Le, 2018; Radford et al, 2019). Another approach involves using external knowledge bases (Emami et al, 2018; Fähndrich et al, 2018); however, the accuracy of these models still lags behind that of the aforementioned pre-trained models.…”
Section: Related Work
confidence: 99%