<span>Handwritten signature verification is widely used in industries where verification and identification are essential, including banking, education, legal proceedings, and criminal investigation. In this research, we have developed an accurate offline signature verification model that can be used in a writer-independent scenario. First, the handwritten signature images go through four preprocessing stages to make them suitable for extracting distinctive features. Then, three types of features, namely principal component analysis (PCA) as appearance-based features, the gray-level co-occurrence matrix (GLCM) as texture features, and the fast Fourier transform (FFT) as frequency features, are extracted from the signature images to build a hybrid feature vector for each image. Finally, to classify the signature features, we have designed a fast hyper deep neural network (FHDNN) architecture. Two datasets, SigComp2011 and CEDAR, are used to evaluate our model. The results demonstrate that the proposed model achieves an accuracy of 100%, outperforming several of its predecessors. In terms of precision, recall, and F-score, it performs well on both datasets, reaching 1.00, 0.487, and 0.655, respectively, on the SigComp2011 dataset and 1.00, 0.507, and 0.672, respectively, on the CEDAR dataset.</span>
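As a minimal sketch of the hybrid feature vector the abstract describes (PCA appearance features, GLCM texture features, and FFT frequency features concatenated per image), the following Python example shows one plausible way to assemble it. The preprocessing, image size, GLCM parameters, and the numbers of PCA and FFT coefficients are illustrative assumptions, not the authors' exact settings.

```python
# Hybrid feature sketch: PCA (appearance) + GLCM (texture) + FFT (frequency).
# Assumes images are already preprocessed to fixed-size 8-bit grayscale arrays.
import numpy as np
from sklearn.decomposition import PCA
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19


def glcm_features(img_u8):
    """Texture descriptors from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])


def fft_features(img_u8, k=32):
    """Low-frequency magnitude coefficients of the centered 2-D FFT."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img_u8)))
    h, w = spectrum.shape
    crop = spectrum[h // 2 - k // 2: h // 2 + k // 2,
                    w // 2 - k // 2: w // 2 + k // 2]
    return crop.ravel()


def hybrid_features(images_u8, n_pca=64):
    """Concatenate PCA appearance, GLCM texture, and FFT frequency features."""
    flat = np.array([im.ravel() for im in images_u8], dtype=np.float64)
    pca = PCA(n_components=min(n_pca, len(images_u8)))
    appearance = pca.fit_transform(flat)
    texture = np.array([glcm_features(im) for im in images_u8])
    frequency = np.array([fft_features(im) for im in images_u8])
    return np.hstack([appearance, texture, frequency])


if __name__ == "__main__":
    # Stand-in for a batch of preprocessed 128x128 signature images.
    rng = np.random.default_rng(0)
    demo = rng.integers(0, 256, size=(10, 128, 128), dtype=np.uint8)
    print(hybrid_features(demo).shape)  # (10, PCA + GLCM + FFT dimensions)
```

The resulting per-image vector would then feed the FHDNN classifier; the paper's actual preprocessing stages and network architecture are not reproduced here.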