2017 International Conference on Advanced Computer Science and Information Systems (ICACSIS)
DOI: 10.1109/icacsis.2017.8355048
Assessing data veracity through domain specific knowledge base inspection

Abstract: The Internet is nowadays a fantastic source of information, thanks both to the quantity of information it provides and to its dynamicity. However, these features also pose challenges when we want to consider only trustworthy information. On the Internet, the process of verifying information, known as fact-checking, cannot be performed by human experts, given the scale of the information that would have to be manually checked and the speed at which it changes. In this paper, we propose an approach to evaluate the trust…

Cited by 3 publications (4 citation statements)
References 13 publications
“…Validation of information is another process that benefits from knowledge bases, especially at a larger scale. Olivieri et al. (2017) proposed an approach to evaluate the trustworthiness of online information, modeling such information as RDF triples, matching its properties to a specific ontology (WordNet, in their case) and to Wikidata, obtaining feature vectors that can be used in a machine-learning pipeline to predict the veracity of a predicate.…”
Section: Results
confidence: 99%
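The pipeline summarized in the statement above, modeling a claim as an RDF-style triple, matching it against a knowledge base, and turning the match into features for a classifier, can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: the toy `KB` dictionary stands in for Wikidata, and the simple threshold rule stands in for a trained machine-learning model.

```python
from difflib import SequenceMatcher

# Toy stand-in for a knowledge base such as Wikidata:
# maps (subject, predicate) -> the known object value.
KB = {
    ("Paris", "capital_of"): "France",
    ("Rome", "capital_of"): "Italy",
}

def features(triple):
    """Turn an (subject, predicate, object) triple into a feature
    vector: [exact KB agreement, string similarity between the
    claimed object and the KB object]."""
    s, p, o = triple
    known = KB.get((s, p))
    if known is None:
        # Claim not covered by the KB: no supporting evidence.
        return [0.0, 0.0]
    return [1.0 if known == o else 0.0,
            SequenceMatcher(None, known, o).ratio()]

def predict_veracity(triple, threshold=0.8):
    """Stand-in for the learned classifier: label a claim true
    when its features indicate strong agreement with the KB."""
    exact, sim = features(triple)
    return exact == 1.0 or sim >= threshold

print(predict_veracity(("Paris", "capital_of", "France")))  # True
print(predict_veracity(("Paris", "capital_of", "Italy")))   # False
```

In the actual approach described by the citing authors, the feature vectors come from ontology (WordNet) and Wikidata matching rather than string similarity, and the final decision is made by a trained model rather than a fixed threshold.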
“…In social media applications, several works defined data veracity in terms of data correctness/genuineness (Agarwal et al., 2016), (Ma et al., 2015), (Wu et al., 2015), (Kwon et al., 2017), (Singh et al., 2019), (Devi et al., 2020). Correspondingly, Olivieri and Giasemidis defined data veracity in terms of trustworthiness (Olivieri et al., 2017), (Giasemidis et al., 2016), and Paryani defined it as text ambiguity (Paryani et al., 2017). However, the focus was the identification of rumors and fake news.…”
Section: Definition
confidence: 99%
“…This finding is due to ignoring the temporal and network features, which improved the classification process over time. Olivieri et al. argued for the need to assess data veracity and used the resource description framework (RDF) vector as a model (Olivieri et al., 2017). A specified ontology is connected to the meaning of a word; each meaning is classified to help query additional evidence.…”
Section: Social Media Applications
confidence: 99%
“…These basic approaches are too simplistic because they rely on isolated n-grams; they are limited in that they do not consider the context of the news or the word senses [15]. In [16] and [17] the authors showed that adding semantics to the data analysis improves the ability to classify information correctly, but they also show that this methodology is hardly generalizable, as it requires deep knowledge of the domain of interest.…”
Section: Related Work
confidence: 99%