2019
DOI: 10.1016/j.inffus.2018.07.009

Maximal fusion of facts on the web with credibility guarantee

Abstract: A maximal number of factual claims with credibility higher than the precision requirement are extracted from the Web.
• The learning model is up to 20 times faster than traditional learning.
• The proposed model extracts up to 6 times more highly credible factual claims than a typical information extraction process.
• The proposed model requires less than 57% of the label information to extract the same number of highly credible factual claims.
• The proposed model is robust to 20% noisy data, with only 6% deviation.

Cited by 19 publications (12 citation statements)
References 42 publications
“…To model the different factors or the reliability of different sources, credibility analysis is a possible way. To reflect the credibility relations among the collected probability distributions, different types of credibility functions have been presented…”
Section: Proposed Methods
Mentioning confidence: 99%
“…49 To reflect the credibility relations among the collected probability distribution, different types of credibility functions have been presented. 30,50 It is reasonable to calculate the degree of credibility with the use of the similarity of the probability distribution, determined by the degree of support.…”
Section: Calculate Credibility
Mentioning confidence: 99%
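As a concrete illustration of the idea in the citation statement above, the sketch below derives credibility weights for several sources from the pairwise similarity of their probability distributions: the degree of support for a source is the sum of its similarities to the other sources, and credibility is the normalized support. The total-variation similarity measure and the function name `credibility_from_similarity` are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def credibility_from_similarity(distributions):
    """Illustrative sketch: derive credibility weights for several sources
    from the pairwise similarity of their probability distributions."""
    P = np.asarray(distributions, dtype=float)
    n = P.shape[0]

    # Pairwise similarity: 1 minus the total variation distance between rows.
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            tv = 0.5 * np.abs(P[i] - P[j]).sum()  # total variation in [0, 1]
            sim[i, j] = 1.0 - tv

    # Degree of support for a source: sum of its similarity to the other sources.
    support = sim.sum(axis=1) - 1.0  # drop the self-similarity term (= 1)

    # Credibility: support degrees normalized to sum to 1.
    return support / support.sum()

# Example: three sources reporting distributions over the same three outcomes.
sources = [
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.1, 0.1, 0.8],  # the outlier source receives lower credibility
]
print(credibility_from_similarity(sources))
```

Sources whose distributions agree with the majority receive higher credibility, which is the behavior the quoted passage attributes to similarity-based credibility functions.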
“…Network-level constraints were originally considered in [69,12], in which the establishment of semantic interoperability in large-scale P2P networks was studied. As discussed earlier, these ideas have been adopted for integrity constraints in the reconciliation process of database schemas in [11,13,70]. Yet, these approaches put forward a simplistic view on data models and, thus, integrity constraints.…”
Section: Related Work
Mentioning confidence: 99%
“…Information quality was first proposed by Yager and Petry, based on Gini entropy, to measure the uncertainty of a probability distribution [32], [33]. It has been widely used in pattern classification [34], [35], decision making [36], [37], and so on [38]-[43], [44]. Li et al. propose a generalized expression for information quality for basic probability assignments in Dempster-Shafer evidence theory [45], which gives information quality a broader scope of application…”
Section: Introduction
Mentioning confidence: 99%
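For reference, a minimal sketch of the Gini-entropy-based uncertainty measure mentioned above, assuming the common Gini-Simpson form G(p) = 1 - Σ p_i², with information quality taken here as its complement so that concentrated distributions score higher; the exact formulation in Yager and Petry's papers and in Li et al.'s generalization may differ.

```python
import numpy as np

def gini_entropy(p):
    """Gini entropy (Gini-Simpson index) of a probability distribution:
    G(p) = 1 - sum_i p_i**2.  It is 0 for a point mass and approaches
    1 - 1/n for the uniform distribution over n outcomes."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def information_quality(p):
    """Assumed here to be the complement of Gini entropy, so that
    concentrated (low-uncertainty) distributions score higher; this is an
    illustrative choice, not necessarily Yager and Petry's exact formulation."""
    return 1.0 - gini_entropy(p)

print(gini_entropy([1.0, 0.0, 0.0]))   # 0.0    -> information quality 1.0
print(gini_entropy([1/3, 1/3, 1/3]))   # ~0.667 -> information quality ~0.333
```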