2021
DOI: 10.1109/mmul.2020.3048044
Multimodal Political Deception Detection

Cited by 13 publications (8 citation statements)
References 14 publications
“…We kept Chinese and Mandarin apart because they were referred to as such. Furthermore, most studies on the aforementioned languages are devoted to vocal cues [ 38 , 63 , 68 , 70 , 75 , 80 – 82 , 88 , 97 , 105 , 111 , 114 , 115 ].…”
Section: Discussion (mentioning)
Confidence: 99%
“…In addition, the classifier algorithm used could play a big role in the outcome. This is evident from the research conducted by Kamboj et al. [27] and Şen et al. [31]: the low performance of the former was rationalized by the authors as being due to acoustic features having inherently low discriminating power, while the latter achieved its highest performance when combining acoustic features with facial features.…”
Section: Audio Based Deception Detection (mentioning)
Confidence: 95%
“…As previously discussed, attempting a multimodal approach can significantly improve the classification accuracy of DD; however, it is not clear that it guarantees an improvement in classification performance. For instance, a recent study by Kamboj et al. [27] achieved an accuracy of 70% using a combination of lexical, acoustic, and visual features, while Şen et al. [31] achieved 72% accuracy when combining all visual, acoustic, and linguistic features, as opposed to 84.18% with only the visual and acoustic modalities. This may be attributed to some modalities having less discriminative power than others, depending on the approach and methodology.…”
Section: Audio Based Deception Detection (mentioning)
Confidence: 99%
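The statement above contrasts single-modality accuracies with those of fused features. The following is a minimal, self-contained sketch of what such early (feature-level) fusion looks like in practice; the synthetic data, feature dimensions, logistic-regression classifier, and train/test split are illustrative assumptions only, not the actual setups of Kamboj et al. or Şen et al.

```python
# Sketch of early (feature-level) fusion for multimodal deception detection.
# All data is synthetic; dimensions and model choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200  # number of video clips (synthetic)

# Hypothetical per-clip feature blocks, one per modality.
acoustic = rng.normal(size=(n, 40))   # e.g., prosodic/MFCC statistics
visual = rng.normal(size=(n, 30))     # e.g., facial action-unit features
lexical = rng.normal(size=(n, 50))    # e.g., bag-of-words or embeddings
y = rng.integers(0, 2, size=n)        # 1 = deceptive, 0 = truthful

def evaluate(features, labels):
    """Train and test a classifier on one feature matrix; return accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

# Compare each single modality against simple concatenation (early fusion).
blocks = {
    "acoustic": acoustic,
    "visual": visual,
    "lexical": lexical,
    "fused": np.hstack([acoustic, visual, lexical]),
}
for name, block in blocks.items():
    print(f"{name:>8}: accuracy = {evaluate(block, y):.2f}")
```

On real data, comparing the fused score against each single-modality score is how one would observe the effect the citing authors describe: concatenating all modalities does not automatically outperform the strongest subset of them.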
“…They achieved an accuracy of up to 98%. Kamboj et al. [12] attempted to create their own dataset by collecting videos of political figures from the internet and labeling them using PolitiFact. The modalities they targeted were linguistic, acoustic, and visual features.…”
Section: Related Work (mentioning)
Confidence: 99%