In recent years, deep learning-based algorithms such as CNNs, LSTMs, and autoencoders have been proposed to rank suspicious buggy files. Meanwhile, representation learning has proven to be an effective approach for extracting rich semantic features from bug reports and source code, reducing their lexical mismatch. In this paper, we propose AttentiveBugLocator, a Siamese-based representation learning model for improved bug localization performance. AttentiveBugLocator employs the BERT and code2vec embedding models to produce rich semantic representations, and a Siamese BiLSTM network with context attention to learn semantic matching between bug reports (BRs) and source files (SFs). To further improve the effectiveness of AttentiveBugLocator, the semantic matching features are carefully fused with VSM, stack trace, and code complexity features. Evaluation results on four datasets show that AttentiveBugLocator can identify buggy files with MAP and MRR scores of 56% and 62%, respectively, outperforming several state-of-the-art approaches.
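As an illustration only (not the paper's implementation), the core matching step described above — attention-weighted pooling over the hidden states of a shared (Siamese) encoder for a bug report and a source file, followed by a similarity score — can be sketched in NumPy. The shapes, the context vector `w`, and the use of cosine similarity are assumptions for this sketch; in the actual model the hidden states would come from the BiLSTM over BERT/code2vec embeddings and all weights would be learned.

```python
import numpy as np

def attentive_pool(H, w):
    """Attention-weighted summary of hidden states.
    H: (T, d) sequence of hidden states; w: (d,) context vector (learned in practice)."""
    scores = H @ w
    alpha = np.exp(scores - scores.max())   # softmax over time steps
    alpha /= alpha.sum()
    return alpha @ H                        # (d,) pooled representation

def siamese_similarity(H_br, H_sf, w):
    """Cosine similarity between BR and SF summaries.
    The same context vector w is applied to both branches (shared Siamese weights)."""
    v_br = attentive_pool(H_br, w)
    v_sf = attentive_pool(H_sf, w)
    return float(v_br @ v_sf / (np.linalg.norm(v_br) * np.linalg.norm(v_sf)))

# Toy inputs standing in for BiLSTM hidden states of a bug report and a source file.
rng = np.random.default_rng(0)
H_br = rng.normal(size=(5, 8))
H_sf = rng.normal(size=(7, 8))
w = rng.normal(size=8)
sim = siamese_similarity(H_br, H_sf, w)
print(sim)
```

In the full approach, this semantic matching score would be one feature among several, fused with VSM similarity, stack trace, and code complexity features before ranking candidate files.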