Securing passwords, whether written or spoken, is one of the challenging authentication issues faced by individuals and organizations. Written passwords can be easily stolen by shoulder surfing or man-behind attacks, while spoken passwords can be recorded and replayed by attackers. A silent password based on a dual security model with lip movement analysis is a promising solution to these attacks. The goal of the current research is to propose a hybrid voting framework for silent password recognition using lip movement analysis. The proposed framework is built for the Arabic language by recognizing digits from a predefined Arabic lexicon, which mainly contains the Arabic digits from zero to nine in different shapes. The framework takes a video or a sequence of images as input and outputs the corresponding silent password from the frames extracted from the input video. In this paper, three techniques are employed to extract effective visual features from mouth and lip movement: the SURF, HoG, and Haar feature extractors. The features produced by each technique are fed separately into a classification model, namely a hidden Markov model (HMM), which identifies the corresponding Arabic digit from the predefined lexicon based on the input features. The classification models produced by the three techniques are then combined in a voting scheme to produce the final classification result. The proposed model is tested on a handcrafted lip-movement data set and shows promising results with improved accuracy of Arabic digit recognition.

INDEX TERMS Hidden Markov model, lip analysis, silent password, voting scheme.
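To illustrate the final stage of the framework, the sketch below shows a minimal majority-voting step over the three per-technique predictions, assuming each feature pipeline (SURF, HoG, Haar) has already produced a digit label through its own HMM. The function and variable names are hypothetical placeholders, not the paper's implementation.

```python
# Minimal sketch of the voting stage. The three input labels are assumed
# to come from the SURF-HMM, HoG-HMM, and Haar-HMM classifiers described
# in the paper; here they are simply hard-coded example values.
from collections import Counter

def majority_vote(predictions):
    """Return the digit label predicted by most of the per-technique HMMs."""
    label, _count = Counter(predictions).most_common(1)[0]
    return label

# Hypothetical per-technique predictions for one input video,
# e.g. SURF-HMM -> 7, HoG-HMM -> 7, Haar-HMM -> 3.
predictions = [7, 7, 3]
print(majority_vote(predictions))  # -> 7
```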