Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019)
DOI: 10.18653/v1/N19-1175
Box of Lies: Multimodal Deception Detection in Dialogues

Abstract: Deception often takes place during everyday conversations, yet conversational dialogues remain largely unexplored by current work on automatic deception detection. In this paper, we address the task of detecting multimodal deceptive cues during conversational dialogues. We introduce a multimodal dataset containing deceptive conversations between participants playing the Box of Lies game from The Tonight Show Starring Jimmy Fallon®, in which they try to guess whether an object description provided by their opponent is …

Cited by 34 publications (30 citation statements); references 16 publications.
“…For comparison, single models of CatBoost, XGBoost and LightGBM achieved F-scores of 84.1%, 84.6%, and 85.0%, respectively. The achieved empirical results are highly competitive and comparable with the results presented by other researchers [11,12,17,18,21,20,23] (see Table 1). We present our results achieved with the use of two corpora and the approach described above.…”
Section: Discussion of the Results (supporting)
confidence: 86%
“…Approach from [11]: F-score = 63.9%, Precision = 76.1%
Approach from [12]: UAR = 74.9%
Baseline system [17]: UAR = 68.3%
Approach from [18]: Accuracy (max) = 75.0%
Approach from [21]: UAR (max) = 70.0%
Approach from [20]: UAR = 73.5%, F-score = 75.0%, Precision = 77.0%
Approach from [23]: Accuracy…”
Section: Approach Classification Results (mentioning)
confidence: 99%
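The table above compares approaches by UAR (unweighted average recall), F-score, precision, and accuracy. As a reference for how these figures relate on a binary truthful/deceptive task, here is a minimal sketch; the function name and the label arrays are made up for illustration, not taken from any of the cited papers:

```python
def binary_metrics(y_true, y_pred, positive=1):
    """Precision, recall, and F-score for the positive class, plus UAR.

    UAR (unweighted average recall) is the mean of per-class recall,
    so it is insensitive to class imbalance, unlike plain accuracy.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0       # recall of the positive class
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    recall_neg = tn / (tn + fp) if tn + fp else 0.0   # recall of the negative class
    uar = (recall + recall_neg) / 2                   # unweighted average recall
    return precision, recall, f_score, uar

# Hypothetical labels: 1 = deceptive, 0 = truthful
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
p, r, f, uar = binary_metrics(y_true, y_pred)
```

With these made-up labels the classifier over-predicts the deceptive class, so precision drops relative to recall and UAR falls below the positive-class recall, which is the kind of gap the UAR-reporting papers above are guarding against.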
“…The use of several modalities for lie detection has also been investigated to assess its impact on improving detection algorithms. In [30][31][32], both verbal and non-verbal features were utilized. The verbal features were extracted from linguistic features in transcriptions, while the non-verbal ones consisted of binary features encoding information about facial and hand gestures.…”
Section: Related Work (mentioning)
confidence: 99%
“…The verbal features were extracted from linguistic features in transcriptions, while the non-verbal ones consisted of binary features encoding information about facial and hand gestures. In addition, Soldner et al. [32] introduced dialogue features consisting of interaction cues. Other multimodal approaches combined the previously mentioned verbal and non-verbal features with micro-expressions [3][4][5], thermal imaging [33], or spatio-temporal features extracted from 3D CNNs [34,35].…”
Section: Related Work (mentioning)
confidence: 99%