Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1216
Reasoning with Heterogeneous Knowledge for Commonsense Machine Comprehension

Abstract: Reasoning with commonsense knowledge is critical for natural language understanding. Traditional methods for commonsense machine comprehension mostly focus on one specific kind of knowledge, neglecting the fact that commonsense reasoning requires simultaneously considering different kinds of commonsense knowledge. In this paper, we propose a multi-knowledge reasoning method, which can exploit heterogeneous knowledge for commonsense machine comprehension. Specifically, we first mine different kinds of knowl…

Cited by 44 publications (30 citation statements)
References 48 publications
“…The corpus we used in this paper was first designed for the Story Cloze Test (SCT) (Mostafazadeh et al. 2016a), which requires selecting the correct ending from two candidates given a story context. Feature-based (Chaturvedi, Peng, and Dan 2017; Lin, Sun, and Han 2017) and neural (Mostafazadeh et al. 2016b; Wang, Liu, and Zhao 2017) classification models have been proposed to measure the coherence between a candidate ending and a story context from various aspects such as event, sentiment, and topic. However, story ending generation (Li, Ding, and Liu 2018; Peng et al. 2018) is more challenging in that the task requires modeling context clues and implicit knowledge to produce reasonable endings.…”
Section: Related Work
confidence: 99%
“…Similarly, introduced a new annotation framework to explain the psychology of story characters with commonsense knowledge. Commonsense knowledge has also been shown useful for choosing the correct story ending from two candidate endings (Lin, Sun, and Han 2017; Li et al. 2018).…”
Section: Related Work
confidence: 99%
“…Our work belongs to the second group. Lin et al. [2017] learn the correlation between concepts with pointwise mutual information. We explore richer contexts from the rational knowledge graph with a graph-based neural network and empirically show that the approach performs better on question answering datasets.…”
Section: Related Work
confidence: 99%
“…Knowledge graphs have been applied in various natural language processing applications, such as reading comprehension (Lin et al., 2017; Yang and Mitchell, 2017) and machine translation (Zhang et al., 2017). ERNIE: Enhanced Representation through Knowledge Integration (Sun et al., 2019) appends knowledge to the input of the model and learns via knowledge masking, as well as entity-level masking and phrase-level masking.…”
Section: Knowledge Integration
confidence: 99%