Proceedings of the Brazilian Symposium on Multimedia and the Web 2021
DOI: 10.1145/3470482.3479634

Learning Textual Representations from Multiple Modalities to Detect Fake News Through One-Class Learning

Cited by 10 publications (8 citation statements)
References 23 publications
“…One-Class Learning trains with only one class and predicts whether examples belong to the interest class or not [Emmert-Streib and Dehmer 2022]. We can define OCL as [Gôlo et al 2021]:…”
Section: One-Class Learning (mentioning)
Confidence: 99%
“…OCL uses only examples from one class (interest class) to learn, i.e., the learning is in the absence of counterexamples [Tax 2001]. OCL will be able to identify whether an instance belongs to the interest class, reducing the labeling effort and being more appropriate for open-domain applications or applications in which the user is interested in one class [Gôlo et al 2021a, Gôlo et al 2021b].…”
Section: Introduction (mentioning)
Confidence: 99%
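The excerpts above describe the core of One-Class Learning (OCL): training on examples of the interest class only, with no counterexamples, and deciding class membership at prediction time. A minimal sketch of this idea using scikit-learn's `OneClassSVM` on synthetic data (the data and hyperparameters are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train on examples from the interest class only (no counterexamples),
# as OCL prescribes. The data here is synthetic for illustration.
rng = np.random.default_rng(0)
interest = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # interest class
outliers = rng.normal(loc=6.0, scale=1.0, size=(5, 2))    # unseen class

clf = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05).fit(interest)

# predict() returns +1 for instances judged to belong to the interest
# class and -1 otherwise; no labels from the other class were needed.
print(clf.predict(interest[:3]))
print(clf.predict(outliers))
```

In a fake-news setting, the interest class would be the labeled fake (or real) news, reducing the labeling effort as the excerpt notes.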
“…Second, the study applies MVAE in three real scenarios. First, the study detects fake news through OCL with the MVAE representations [Gôlo et al 2021a]. The MVAE learns a new representation from the combination of promising modalities: text embeddings, linguistic features, and topic/density information.…”
Section: Introduction (mentioning)
Confidence: 99%
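The excerpt names three modalities the MVAE combines: text embeddings, linguistic features, and topic/density information. The MVAE learns a joint latent representation from them; as a minimal stand-in for that fusion step, the sketch below simply concatenates hypothetical per-document feature matrices (all dimensions and names are assumptions for illustration, not the paper's actual features):

```python
import numpy as np

# Hypothetical per-document features for the three modalities named
# in the excerpt. An MVAE would learn a joint latent code from these;
# naive concatenation is the simplest fusion baseline.
n_docs = 4
text_emb   = np.random.rand(n_docs, 768)  # e.g., sentence-embedding size
linguistic = np.random.rand(n_docs, 20)   # e.g., handcrafted counts/ratios
topic_dens = np.random.rand(n_docs, 10)   # e.g., topic proportions

fused = np.concatenate([text_emb, linguistic, topic_dens], axis=1)
print(fused.shape)  # (4, 798): one fused vector per document
```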
“…OCL uses only instances from one class to learn [22]. OCL will be able to identify whether an instance belongs to the interest class, reducing the labeling effort and being more appropriate for open-domain applications or one-class applications [6,10].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Even so, studies use the traditional Bag-of-Words (BoW) technique [12,16]. Other studies explore dimensionality reduction techniques [8,14]. Finally, studies use language models via neural networks [17,19].…”
Section: Introduction (mentioning)
Confidence: 99%
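The first two representations named in the excerpt can be sketched together: a Bag-of-Words term-count matrix, followed by a dimensionality reduction step via truncated SVD (LSA-style). The toy corpus and component count below are illustrative assumptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy corpus: CountVectorizer builds the Bag-of-Words count matrix,
# and TruncatedSVD reduces its dimensionality (as in LSA).
docs = [
    "fake news spreads fast online",
    "real news is verified by journalists",
    "fake stories spread online",
]
bow = CountVectorizer().fit_transform(docs)  # (3 docs, vocab_size) counts
reduced = TruncatedSVD(n_components=2, random_state=0).fit_transform(bow)
print(bow.shape, reduced.shape)
```

Language-model representations (the third family the excerpt mentions) would replace this pipeline with dense embeddings from a pretrained neural network.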