Artificial intelligence has dramatically improved the quality of services that address human needs, including technology that serves the blind and visually impaired, particularly systems that help them understand visual scenes to facilitate navigation in their daily lives. This study developed an image captioning model to aid the blind and visually impaired in outdoor navigation. The model employs the encoder-decoder method, with convolutional neural network (CNN) feature extraction and an attention layer as the encoder and a long short-term memory (LSTM) network as the decoder. ResNet101 and ResNet152 are used in the encoder to extract image features. The extracted features and captions are forwarded to the attention layer and the LSTM network. The attention layer uses the Bahdanau attention mechanism. Model accuracy is evaluated using the bilingual evaluation understudy (BLEU) score, the metric for evaluation of translation with explicit ordering (METEOR), and recall-oriented understudy for gisting evaluation-longest common subsequence (ROUGE-L). ResNet101 performed best, scoring 91.811% on BLEU-4 and 94.0337% on METEOR. The captioning results show that the model is quite successful in producing a simple caption appropriate to each image.
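The core of the architecture described above is the Bahdanau (additive) attention step, which weighs the CNN feature vectors at each decoding step given the LSTM's previous hidden state. The following is a minimal NumPy sketch of that step only, not the authors' implementation: the function name, the weight matrices `W1`, `W2`, `v`, and all dimensions (49 spatial regions, 2048-dimensional ResNet features, a 512-dimensional decoder state) are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: one weight per image region.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def bahdanau_attention(features, hidden, W1, W2, v):
    """Additive (Bahdanau) attention over CNN features.

    features : (num_regions, feat_dim)  grid of CNN feature vectors
    hidden   : (hid_dim,)               previous LSTM decoder state
    W1, W2, v: learned projection parameters (here random placeholders)
    Returns the context vector and the attention weights.
    """
    # score_i = v^T tanh(W1 f_i + W2 h), one scalar per region
    scores = np.tanh(features @ W1.T + hidden @ W2.T) @ v
    weights = softmax(scores)           # non-negative, sums to 1
    context = weights @ features        # weighted sum of region features
    return context, weights

# Toy example with hypothetical dimensions (e.g. a 7x7 ResNet feature grid).
rng = np.random.default_rng(0)
num_regions, feat_dim, hid_dim, att_dim = 49, 2048, 512, 256
features = rng.standard_normal((num_regions, feat_dim))
hidden = rng.standard_normal(hid_dim)
W1 = rng.standard_normal((att_dim, feat_dim)) * 0.01
W2 = rng.standard_normal((att_dim, hid_dim)) * 0.01
v = rng.standard_normal(att_dim) * 0.01
context, weights = bahdanau_attention(features, hidden, W1, W2, v)
print(context.shape, weights.shape)
```

In the full model, the resulting context vector would be concatenated with the current word embedding and fed to the LSTM decoder at each time step.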