2018
DOI: 10.1109/tciaig.2017.2657690

Learning to Extract Action Descriptions From Narrative Text

Abstract: The version in the Kent Academic Repository may differ from the final published version. Users are advised to check http://kar.kent.ac.uk for the status of the paper. Users should always cite the published version of record.

Cited by 6 publications (3 citation statements)
References 13 publications
“…Lu et al. [16] consider a medical dataset together with a domain-specific study. Ludwig et al. [17] present a mechanism to extract specific actions from textual content using a Bayesian network. Park et al. [18] present a visual analytics system that introduces a bipolar concept into the modeling.…”
Section: A. Background (mentioning)
confidence: 99%
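The cited mechanism uses a Bayesian network to pick action descriptions out of text. As a rough, hypothetical illustration of the underlying idea only (a naive Bayes classifier standing in for the actual Bayesian-network model, with invented toy data), a sentence-level action/non-action classifier might look like this:

    # Hypothetical sketch: a naive Bayes stand-in for the cited
    # Bayesian-network approach; sentences and labels are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_sentences = [
        "The knight draws his sword and charges.",   # action
        "The castle stood silent under the moon.",   # description
        "She opens the door and steps inside.",      # action
        "The hall was lined with old portraits.",    # description
    ]
    train_labels = ["action", "non-action", "action", "non-action"]

    # Unigram+bigram counts feed a multinomial naive Bayes classifier.
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(train_sentences, train_labels)

    print(model.predict(["He grabs the torch and runs."]))  # -> ['action']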
“…Word embeddings do not contribute much to the problem of porting models trained in one subject domain to another. Our own research during the MUSE project on machine understanding of language in the context of interactive storytelling showed that word embeddings used as features in a semantic role labeling task could only slightly improve the results, or required manually built extra resources to improve performance substantially [22,39]. Third, and most importantly for the theme of this paper, current word embeddings and language models are successful representations in simple language processing tasks.…”
Section: Current Proposed Solutions: Representation Learning and Deep… (mentioning)
confidence: 99%
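To make "word embeddings used as features in a semantic role labeling task" concrete, here is a minimal sketch assuming a pretrained token-to-vector table is available; the table, the hand-crafted cues, and all names below are placeholders, not the MUSE project's actual pipeline:

    # Minimal sketch: combine a dense pretrained embedding with a few
    # sparse lexical cues into one per-token feature vector for SRL.
    import numpy as np

    EMB_DIM = 50
    vectors = {  # stand-in for a real pretrained embedding table
        "opens": np.random.rand(EMB_DIM),
        "door": np.random.rand(EMB_DIM),
    }

    def token_features(token, is_predicate):
        # Unknown tokens fall back to a zero vector.
        emb = vectors.get(token.lower(), np.zeros(EMB_DIM))
        cues = np.array([float(is_predicate), float(token.istitle())])
        return np.concatenate([emb, cues])  # dense + sparse cues combined

    x = token_features("opens", is_predicate=True)
    print(x.shape)  # (52,)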
“…Sequence-to-sequence (seq2seq) modeling has been successfully applied to neural machine translation [6], [7], since the model is language-independent and able to implicitly learn semantic [8], syntactic, and contextual dependencies [9]. Further advances in end-to-end training with this model have made it possible to build successful systems [10] for different natural language tasks, including parsing [11], image captioning [12], and open-domain dialogue generation [5].…”
Section: State-of-the-art (mentioning)
confidence: 99%
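For readers new to the encoder-decoder pattern behind seq2seq, a minimal PyTorch sketch follows: the encoder compresses the source sequence into a hidden state, which then initializes the decoder. Sizes are toy values and the model is untrained; real translation systems add attention, batching, and a training loop.

    # Minimal seq2seq encoder-decoder sketch (toy sizes, untrained).
    import torch
    import torch.nn as nn

    VOCAB, EMB, HID = 100, 32, 64

    class Seq2Seq(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, EMB)
            self.encoder = nn.GRU(EMB, HID, batch_first=True)
            self.decoder = nn.GRU(EMB, HID, batch_first=True)
            self.out = nn.Linear(HID, VOCAB)

        def forward(self, src, tgt):
            _, h = self.encoder(self.embed(src))           # encode source into h
            dec_out, _ = self.decoder(self.embed(tgt), h)  # decoder starts from h
            return self.out(dec_out)                       # per-step vocab logits

    model = Seq2Seq()
    src = torch.randint(0, VOCAB, (1, 7))  # toy source token ids
    tgt = torch.randint(0, VOCAB, (1, 5))  # toy target prefix (teacher forcing)
    print(model(src, tgt).shape)           # torch.Size([1, 5, 100])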