Predicting the future trajectories of surrounding pedestrians is one of the most essential yet challenging tasks for safe urban autonomous driving. Despite this importance, limited research has addressed the egocentric view from easy-to-access vehicle-mounted cameras for autonomous driving applications. This paper presents a non-autoregressive transformer-based trajectory prediction method for pedestrians in the egocentric view. Furthermore, our proposed model predicts ego-motion-independent future trajectories, which can be used directly in downstream tasks such as motion planning for autonomous vehicles. This approach differs from previous research in that it predicts the future positions of pedestrians with respect to the currently observed image, rather than their positions in future observed images. The proposed model, referred to as the TransPred network, consists of three main modules: vehicle motion compensation, a non-autoregressive transformer, and a conditional variational autoencoder (CVAE). The transformer effectively handles raw images and the historical trajectory of the target pedestrian, enabling it to generate refined future predictions. The CVAE module at the end of the pipeline then samples multiple plausible future trajectories, contributing to diverse and realistic predictions. We evaluate our model on the nuScenes dataset and an in-house dataset collected with our sensor-equipped vehicle, achieving state-of-the-art performance for prioritized trajectories on both. Moreover, the effectiveness of the proposed ego-motion-independent trajectories is demonstrated through risk assessment experiments.
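To make the three-module composition concrete, the following is a minimal PyTorch-style sketch of such a pipeline. It is an assumption-laden illustration, not the authors' implementation: all module names (`CVAEHead`, `TransPredSketch`), feature dimensions, the parallel decoder queries, and the pooling step are hypothetical choices that merely instantiate the described structure (motion-compensated inputs, a non-autoregressive transformer, and a CVAE head producing K candidate futures).

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a TransPred-style pipeline; names, dimensions,
# and design details are illustrative assumptions, not the paper's code.

class CVAEHead(nn.Module):
    """Samples K latent codes and decodes K candidate future trajectories."""
    def __init__(self, d_model: int, latent_dim: int, horizon: int):
        super().__init__()
        self.to_latent = nn.Linear(d_model, 2 * latent_dim)   # -> (mu, logvar)
        self.decoder = nn.Linear(d_model + latent_dim, horizon * 2)
        self.horizon = horizon

    def forward(self, h: torch.Tensor, k: int = 20) -> torch.Tensor:
        mu, logvar = self.to_latent(h).chunk(2, dim=-1)
        std = (0.5 * logvar).exp()
        # Draw K latent samples per pedestrian to obtain diverse futures.
        z = mu.unsqueeze(1) + std.unsqueeze(1) * torch.randn(
            h.size(0), k, mu.size(-1), device=h.device)
        h_k = h.unsqueeze(1).expand(-1, k, -1)
        out = self.decoder(torch.cat([h_k, z], dim=-1))
        return out.view(h.size(0), k, self.horizon, 2)  # (B, K, T_future, xy)


class TransPredSketch(nn.Module):
    def __init__(self, d_model: int = 128, horizon: int = 12):
        super().__init__()
        self.traj_embed = nn.Linear(2, d_model)     # past (x, y) per step
        self.img_embed = nn.Linear(512, d_model)    # e.g., CNN feature per frame
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        # Non-autoregressive: all future steps are queried in parallel,
        # rather than decoded one step at a time.
        self.queries = nn.Parameter(torch.randn(horizon, d_model))
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.cvae = CVAEHead(d_model, latent_dim=32, horizon=horizon)

    def forward(self, past_traj: torch.Tensor, img_feats: torch.Tensor,
                k: int = 20) -> torch.Tensor:
        # past_traj: (B, T_past, 2), already ego-motion compensated, i.e.
        # expressed in the coordinate frame of the current image.
        # img_feats: (B, T_past, 512), precomputed per-frame image features.
        tokens = torch.cat(
            [self.traj_embed(past_traj), self.img_embed(img_feats)], dim=1)
        memory = self.encoder(tokens)
        q = self.queries.unsqueeze(0).expand(past_traj.size(0), -1, -1)
        h = self.decoder(q, memory).mean(dim=1)  # pool future-step tokens
        return self.cvae(h, k=k)                 # (B, K, horizon, 2)


# Usage sketch: 4 pedestrians, 8 past steps, 20 candidate futures each.
model = TransPredSketch()
preds = model(torch.randn(4, 8, 2), torch.randn(4, 8, 512))
print(preds.shape)  # torch.Size([4, 20, 12, 2])
```

The learned decoder queries are what make this sketch non-autoregressive: every future time step attends to the encoded history simultaneously, so the K candidate trajectories are produced in a single forward pass instead of step-by-step rollout.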