2022
DOI: 10.1007/978-981-16-5652-1_43
Media Bias Detection Using Sentimental Analysis and Clustering Algorithms

Cited by 1 publication (2 citation statements)
References 10 publications
“…The third assumption focused only on fake media, systemic bias and the possibility of abuses, given the context of profound uncertainties regarding AI (Borsci et al. 2023; Feher and Veres 2022; Rawat and Vadivu 2022; Holton and Boyd 2021; Bell 2018). However, this particular survey showed more comprehensive dynamics of visions strongly determined by participants' personal beliefs and negotiation of uncertainties, which we modeled as a pair of glasses on a fulcrum: a Glasses Model of AI Trust.…”
Section: Discussion
Confidence: 99%
“…In parallel, a key requirement is to control (and clean up) datasets, fight disinformation and foster truly diverse, inclusive and reliable content (Georgieva et al. 2022). These transformations support benefits and trigger uncertainties, especially as the above-mentioned driver functions reveal challenges of socio-technical issues with inequalities (Holton and Boyd 2021), system biases (Rawat and Vadivu 2022), cultural-economic colonization (Bell 2018), data colonialism (Couldry and Mejias 2019), data-driven surveillance with privacy issues (Fossa 2023) and political destabilization through fake campaigns (Borsci et al. 2023). The interactivity and virality of personal assistants, virtual influencers, AI-produced content and art, or deepfakes leads to fundamental questions, such as how information sources can be evaluated or how they will add value to an existing ICM process or service.…”
Section: Theoretical Framework
Confidence: 99%