Pedestrian identification has attracted considerable attention over the last decade in computer vision applications such as security surveillance and robotics. It is usually achieved through human biometrics; however, it is sometimes necessary to identify pedestrians at a distance, which can be accomplished by exploiting differences in whole-body appearance. Real-time pedestrian identification is a challenging task due to several factors, such as illumination effects, noise, changes in viewpoint, and video resolution. More recently, deep neural networks (DNNs) have shown strong performance across a variety of real-world applications. In this article, we present a real-time architecture for pedestrian identification using a motion-controlled DNN. In the proposed architecture, motion vectors are first computed using optical flow and then utilized in the next step, feature extraction. Two types of features are computed: HOG and DNN features. The pre-trained VGG19 CNN model is employed and trained through transfer learning, and the deep features are extracted from two layers, fully connected layers 7 and 8. We also propose a feature selection method based on Bayesian modeling together with a linear SVM (LSVM). The best selected HOG and DNN features are finally fused into one matrix, and a multi-class support vector machine classifier is used for the final identification. Videos recorded in a real-time environment are used for the experimental evaluation, achieving an average accuracy of 98.62%. The overall identification accuracy demonstrates the effectiveness of the proposed approach.
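The first step of the pipeline, computing motion vectors with optical flow, can be illustrated with a minimal Lucas-Kanade-style sketch in NumPy. The abstract does not state which optical-flow algorithm the authors use; a production system would more likely call a library routine such as OpenCV's `calcOpticalFlowFarneback`. The function name and window size below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lucas_kanade_flow(prev, curr, win=5):
    """Per-pixel motion vectors between two grayscale frames.

    Illustrative Lucas-Kanade sketch; the paper's exact optical-flow
    method is not specified in the abstract.
    """
    # Spatial gradients of the first frame and the temporal gradient.
    Iy, Ix = np.gradient(prev.astype(float))
    It = curr.astype(float) - prev.astype(float)
    half = win // 2
    h, w = prev.shape
    flow = np.zeros((h, w, 2))  # flow[..., 0] = u (x), flow[..., 1] = v (y)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ix = Ix[y - half:y + half + 1, x - half:x + half + 1].ravel()
            iy = Iy[y - half:y + half + 1, x - half:x + half + 1].ravel()
            it = It[y - half:y + half + 1, x - half:x + half + 1].ravel()
            A = np.stack([ix, iy], axis=1)
            ATA = A.T @ A
            # Solve only where the local system is well conditioned.
            if np.linalg.det(ATA) > 1e-6:
                flow[y, x] = -np.linalg.solve(ATA, A.T @ it)
    return flow
```

The resulting motion-vector field can then gate which frame regions are passed on to the feature-extraction stage.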
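The selection-and-fusion step, keeping the best HOG and DNN features and concatenating them into a single matrix, can be sketched as follows. The abstract does not detail the Bayesian-modeling/LSVM selection criterion, so a simple per-feature variance score stands in for it here; all names, dimensions, and the scoring rule are illustrative assumptions.

```python
import numpy as np

def select_top_k(features, scores, k):
    """Keep the k highest-scoring feature columns (original order preserved)."""
    keep = np.sort(np.argsort(scores)[::-1][:k])
    return features[:, keep]

def serial_fusion(hog_feats, dnn_feats):
    """Concatenate the two selected feature sets into one fused matrix."""
    return np.concatenate([hog_feats, dnn_feats], axis=1)

rng = np.random.default_rng(0)
hog = rng.normal(size=(8, 36))    # 8 samples, 36 HOG dims (illustrative)
dnn = rng.normal(size=(8, 4096))  # e.g. VGG19 fc7-sized activations

# Stand-in relevance score: per-feature variance (the paper uses a
# Bayesian-modeling + LSVM criterion not detailed in the abstract).
hog_sel = select_top_k(hog, hog.var(axis=0), 16)
dnn_sel = select_top_k(dnn, dnn.var(axis=0), 256)
fused = serial_fusion(hog_sel, dnn_sel)  # shape (8, 272)
```

The fused matrix would then be passed to the multi-class SVM for the final identification.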