2011
DOI: 10.1007/978-3-642-20291-9_47
An Empirical Study of Massively Parallel Bayesian Networks Learning for Sentiment Extraction from Unstructured Text

Cited by 8 publications (3 citation statements). References 10 publications.
“…As contextualized language models develop nowadays, we also try BERT [11] to generate the contextualized embeddings, but it decreases the performance by 9.8%. It is not surprising because the context in each node is very limited and the huge size of parameters (110M in BERT-BASE) for fine-tuning can easily cause an over-fitting problem.…”
Section: Ablation Study
confidence: 99%
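To make the excerpt concrete, here is a minimal sketch (not code from the citing paper) of producing contextualized node embeddings with BERT-BASE via the HuggingFace transformers API. Freezing the encoder, rather than fine-tuning all ~110M weights on nodes of only a few words, is one standard way to sidestep the over-fitting the authors report.

import torch
from transformers import AutoModel, AutoTokenizer

# BERT-BASE: ~110M parameters, 768-dimensional hidden states.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()  # frozen feature extraction; no fine-tuning of the 110M weights

with torch.no_grad():
    # A node phrase this short gives BERT very little context to exploit.
    inputs = tokenizer("great battery life", return_tensors="pt")
    # last_hidden_state: (batch, tokens, 768) contextualized token vectors
    embeddings = model(**inputs).last_hidden_state

print(embeddings.shape)  # e.g. torch.Size([1, 5, 768]), incl. [CLS]/[SEP]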
“…We choose BERT without loss of generality. It can be replaced by its alternatives like ELMo [30] or XLNet [42]. On average, each variable node contains only 2-5 words in different verticals.…”
confidence: 99%
“…[3] introduces a method for accelerating Bayesian network parameter learning using Hadoop and MapReduce. Other relevant work on the parallelization of learning Bayesian networks from data includes [10], [6], [14], [4] and [7]. In this paper, we describe a parallel version of the PC algorithm for learning the structure of a Bayesian network from large data sets on a shared-memory computer using threads.…”
Section: Introduction
confidence: 99%
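The threaded parallelization this excerpt refers to can be illustrated with a minimal sketch; it is not the paper's implementation. The PC algorithm's skeleton phase is dominated by (conditional) independence tests that only read shared, immutable data, so they can be distributed across worker threads. The sketch below covers only the level-0 (marginal) tests, and the helper names (independent, skeleton_level0) are hypothetical; note that in CPython a process pool may be needed for real CPU speedups, since the GIL limits thread-level parallelism.

from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

import numpy as np
from scipy import stats

def independent(data, i, j, alpha=0.05):
    # Level-0 test of the PC algorithm: marginal correlation of columns i, j.
    r, p = stats.pearsonr(data[:, i], data[:, j])
    return p > alpha

def skeleton_level0(data, n_threads=8):
    pairs = list(combinations(range(data.shape[1]), 2))
    # Each test only reads the shared data matrix, so the tests are
    # embarrassingly parallel across threads on a shared-memory machine.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = pool.map(lambda ij: independent(data, *ij), pairs)
    # Keep an edge wherever independence is rejected.
    return {ij for ij, indep in zip(pairs, results) if not indep}

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 10))  # 1000 samples, 10 variables
print(sorted(skeleton_level0(data)))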