2016
DOI: 10.1007/978-3-319-46523-4_7

WebBrain: Joint Neural Learning of Large-Scale Commonsense Knowledge

Cited by 8 publications (5 citation statements) | References 25 publications
“…WebChild 2.0 has already been effective in providing background knowledge to applications such as visual question answering (Wang et al., 2016) and neural relation prediction (Chen et al., 2016). The WebChild 2.0 data is freely downloadable at http://www.mpi-inf.mpg.…”
Section: Discussion
confidence: 99%
“…Several authors have tried to improve word embeddings by incorporating external knowledge bases. For example, some authors have proposed models which combine the loss function of a word embedding model, to ensure that word vectors are predictive of their context words, with the loss function of a knowledge graph embedding model, to encourage the word vectors to additionally be predictive of a given set of relational facts (Xu et al., 2014; Celikyilmaz et al., 2015; Chen et al., 2016). Other authors have used knowledge bases in a more restricted way, by taking the fact that two words are linked to each other in a given knowledge graph as evidence that their word vectors should be similar (Speer et al., 2017).…”
Section: Related Work
confidence: 99%
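For concreteness, the combination this citation describes can be sketched as a single objective: a skip-gram term over corpus co-occurrences plus a TransE-style margin term over knowledge-graph triples, with both terms sharing one word-embedding table. The sketch below is a minimal illustration under those assumptions; all names, tensor shapes, and the lam weight are hypothetical and do not reproduce the exact formulation of Xu et al., Celikyilmaz et al., or Chen et al.

```python
# Hypothetical sketch of a joint objective: a negative-sampling skip-gram
# term over corpus co-occurrences plus a TransE-style margin term over
# knowledge-graph triples, sharing one embedding table. Names, shapes,
# and the lam weight are illustrative assumptions.
import torch
import torch.nn.functional as F

V, R, D = 10_000, 20, 100                # vocab size, relation count, embedding dim
word_emb = torch.nn.Embedding(V, D)      # shared by both loss terms
ctx_emb = torch.nn.Embedding(V, D)       # context ("output") vectors
rel_emb = torch.nn.Embedding(R, D)       # relation translation vectors

def skipgram_loss(center, context, negatives):
    """Negative-sampling skip-gram loss over (center, context) index tensors."""
    c = word_emb(center)                               # (B, D)
    pos = (c * ctx_emb(context)).sum(-1)               # (B,)
    neg = torch.einsum('bd,bkd->bk', c, ctx_emb(negatives))  # (B, K)
    return -(F.logsigmoid(pos).mean() + F.logsigmoid(-neg).mean())

def transe_loss(head, rel, tail, corrupt_tail, margin=1.0):
    """Margin loss pushing h + r closer to the true tail than a corrupted one."""
    h, r = word_emb(head), rel_emb(rel)
    d_pos = (h + r - word_emb(tail)).norm(dim=-1)
    d_neg = (h + r - word_emb(corrupt_tail)).norm(dim=-1)
    return F.relu(margin + d_pos - d_neg).mean()

def joint_loss(text_batch, kg_batch, lam=0.5):
    """Weighted sum of the two objectives over one shared embedding space."""
    return skipgram_loss(*text_batch) + lam * transe_loss(*kg_batch)
```

Minimizing such a joint loss ties the two signals together: the co-occurrence term keeps the vectors distributional, while the triple term pulls words into relation-consistent positions in the same space.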
“…First, rather than relying on an external knowledge base, or other forms of supervision, as in e.g. (Chen et al., 2016), our method is completely unsupervised, as our only input consists of a text corpus. Second, whereas existing work has focused on methods for improving word embeddings, our aim is to learn vector representations that are complementary to standard word embeddings.…”
Section: Related Work
confidence: 99%
“…Similarly, Chen et al. [31] presented an approach for harvesting commonsense knowledge that relies on a joint learning model over web-scale data. The model learns vector representations of commonsense words and relations jointly, using large-scale web information extractions and general corpus co-occurrences.…”
Section: arXiv:1809.04708v2 [cs.AI] 27 Sep 2018
confidence: 99%
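To illustrate what such jointly learned word and relation vectors buy downstream, here is a hedged sketch of fact scoring in the shared space: a candidate triple is ranked by the same translation criterion used during training. The tensors are random placeholders standing in for trained embeddings, and the function name fact_score is hypothetical.

```python
# Hypothetical usage sketch: with words and relations in one vector space,
# a candidate commonsense fact (head, relation, tail) can be scored by a
# TransE-style translation criterion. Random tensors stand in for trained
# embeddings; ids and names are placeholders.
import torch

V, R, D = 10_000, 20, 100
word_vecs = torch.randn(V, D)   # stands in for trained word vectors
rel_vecs = torch.randn(R, D)    # stands in for trained relation vectors

def fact_score(head_id: int, rel_id: int, tail_id: int) -> float:
    """Higher score = more plausible under the translation criterion."""
    h, r, t = word_vecs[head_id], rel_vecs[rel_id], word_vecs[tail_id]
    return -torch.norm(h + r - t).item()

# Rank every vocabulary item as a candidate tail for a fixed (head, relation):
scores = -torch.norm(word_vecs[0] + rel_vecs[3] - word_vecs, dim=-1)  # (V,)
best_tails = scores.topk(5).indices  # top-5 candidate tail ids
```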