In forensic interviews, high-stakes deception is prevalent despite the serious consequences it can entail. Studies have shown that most untrained people perform poorly when discriminating liars from truth-tellers. It has therefore become common to adopt technical aids, such as polygraphs, functional Magnetic Resonance Imaging (fMRI) and linguistic analysis, to compensate for this poor judgment. However, the deception indicators used by these methods are not reliable.

In the popular TV program Lie to Me, micro-expressions are used to detect deceit during the investigation of criminal cases. A micro-expression is a rapid and involuntary facial expression that can reveal a concealed emotion. In addition, psychological studies have reported that certain facial actions are difficult to inhibit when the associated facial expressions are genuine, and are equally difficult to fake when they are not. This suggests that deception could be detected by analyzing these facial actions. However, to the best of the author's knowledge, no computer vision research has attempted to discriminate high-stakes deception from truth using facial expressions. This thesis therefore aims to test the validity of facial cues to deception detection in high-stakes situations using computer vision approaches.

Only a limited number of existing databases have been collected specifically for deception detection studies, and none of them were obtained from real-world situations. In this thesis we present a video database of actual high-stakes situations, which we have created using YouTube.

We have adopted 2D appearance-based methods to characterize 3D facial features.
Instead of building a 3D head model, as is the current trend, we have extracted invariant 2D features that are related to the 3D characteristics. To discriminate deception from honesty, we have identified the following deceptive cues from nine separate facial regions through dynamic facial analysis: eye blinks, eyebrow motion, wrinkle occurrence and mouth motion. These cues were then integrated to form a facial behavior pattern vector. A Random Forest was trained on the collected database and applied to classify the facial patterns into deceptive and truthful categories. Despite the many uncontrolled factors (illumination, head pose and facial occlusion) in the videos in our database, we have achieved an accuracy of 76.92% when discriminating liars from truth-tellers using both micro-expressions and
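The classification stage described above, in which per-video cue measurements are assembled into a facial behavior pattern vector and classified with a Random Forest, can be sketched as follows. This is a minimal illustration only: the cue encoding, feature dimensionality, dataset, and forest parameters are assumptions for demonstration, not the thesis's actual implementation, and scikit-learn is used purely as a convenient stand-in.

```python
# Illustrative sketch of Random Forest classification of facial
# behavior pattern vectors. All data here is synthetic; the thesis
# does not specify its exact feature representation or parameters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Assume each video is summarized as four aggregated cue measurements
# (eye blinks, eyebrow motion, wrinkle occurrence, mouth motion).
n_videos = 120
X = rng.random((n_videos, 4))          # synthetic pattern vectors
y = rng.integers(0, 2, size=n_videos)  # 1 = deceptive, 0 = truthful

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print("mean cross-validated accuracy: %.2f" % scores.mean())
```

On real labeled cue vectors rather than random noise, the same pipeline would report how well the integrated facial cues separate the deceptive and truthful classes.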