2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2017.214

Automated Screening of Job Candidate Based on Multimodal Video Processing

Cited by 44 publications (17 citation statements)
References 13 publications
“…Although this method provides high accuracy by tracking motion through landmark points, it suffers from high computation time at the pre‐processing stage. In a very similar study [12], k‐means is applied to action units to select five keyframes. Also, in [13], Hajarolasvadi and Demirel applied the k‐means clustering algorithm to acoustic features and then generated the corresponding spectrograms of the selected key audio segments.…”
Section: Related Research
Citation type: mentioning
Confidence: 99%
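The keyframe-selection step quoted above (k-means over per-frame facial action units, keeping a handful of representative frames) can be sketched as follows. This is a minimal illustration, not the cited authors' code: the function name select_keyframes, the use of scikit-learn's KMeans, and the choice of the frame nearest each cluster centroid are assumptions made here for illustration, and action-unit extraction (e.g. with a tool such as OpenFace) is assumed to have been done already.

```python
# Hypothetical sketch of k-means keyframe selection over action-unit features.
import numpy as np
from sklearn.cluster import KMeans

def select_keyframes(au_features: np.ndarray, n_keyframes: int = 5) -> list:
    """au_features: (n_frames, n_action_units) matrix of per-frame AU intensities."""
    km = KMeans(n_clusters=n_keyframes, n_init=10, random_state=0).fit(au_features)
    keyframes = []
    for c in range(n_keyframes):
        # keep the frame closest to this cluster centre as its representative
        dists = np.linalg.norm(au_features - km.cluster_centers_[c], axis=1)
        keyframes.append(int(np.argmin(dists)))
    return sorted(keyframes)

if __name__ == "__main__":
    # Random data stands in for real features: 300 frames, 17 action units.
    rng = np.random.default_rng(0)
    frames = rng.random((300, 17))
    print(select_keyframes(frames, n_keyframes=5))
```

The same clustering idea transfers to the audio case described in [13]: cluster per-segment acoustic features instead of action units and keep the segments nearest the centroids before computing their spectrograms.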
“…They use Hofstede's dimensions to create a simulated crowd from a cultural perspective. Gorbova and collaborators [21] present a system for automatic personality screening from video presentations that decides, based on visual, audio and lexical cues, whether a person should be invited to a job interview. The work proposed in [13] presents a model to detect personality aspects based on the Big-Five personality model, using individuals' behaviors automatically detected in video sequences.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
“…For comparison, the table also reports system performance for an earlier published version of the system (Achmadnoer Sukma Wicaksana and Liem 2017), which used a smaller feature set and did not yet optimize regression techniques. Furthermore, performance scores are reported for two other proposed solutions: the work in Gorbova et al (2017), employing similar features to ours but with a multi-layer perceptron as the statistical model; and the work in Kaya et al (2017), which obtained the highest accuracies of all participants in the quantitative Challenge. This latter work employed several state-of-the-art feature sets, some of which resulted from representations learned using deep neural networks, with considerably higher dimensionality than our features (thousands of feature dimensions).…”
Section: Quantitative Performance
Citation type: mentioning
Confidence: 99%
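The pipeline described in this excerpt (concatenated visual, audio and lexical features fed to a multi-layer perceptron that regresses the five trait scores) can be sketched roughly as below. All dimensionalities, layer sizes and the random stand-in data are placeholder assumptions, not values from Gorbova et al. (2017); only the overall shape follows the text, including scoring with 1 - mean absolute error as in the ChaLearn Job Candidate Screening Challenge.

```python
# Illustrative multimodal MLP regression sketch; numbers are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_clips = 200
visual = rng.random((n_clips, 64))    # e.g. pooled facial-expression features
audio = rng.random((n_clips, 32))     # e.g. pooled acoustic features
lexical = rng.random((n_clips, 50))   # e.g. transcript-based features
X = np.hstack([visual, audio, lexical])
y = rng.random((n_clips, 5))          # Big-Five trait scores in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)

# Challenge-style score: accuracy = 1 - mean absolute error over traits.
mae = np.mean(np.abs(mlp.predict(X_te) - y_te))
print("accuracy (1 - MAE):", 1.0 - mae)
```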
“…Comparison of quantitative performance (accuracy) between the system described as use case in this chapter (Achmadnoer Sukma Wicaksana 2017), an earlier version of the system presented at the ChaLearn workshop (Achmadnoer Sukma Wicaksana and Liem 2017), and two other proposed solutions for the ChaLearn Job Candidate Screening Challenge. Column headings: Categories | Use case system | Earlier version | Gorbova et al (2017) | Kaya et al…”
Citation type: mentioning
Confidence: 99%