Smart systems have been developed extensively to assist humans in a wide range of tasks, and the explosion of available data has allowed Deep Learning technologies to push assistant systems toward ever greater accuracy. One such task is disseminating the information users need, which is crucial in the tourism sector for promoting local tourism destinations. In this research, we design a local-tourism-specific image captioning model, which will later support the development of AI-powered systems that assist various users. The model is built with a visual attention mechanism and uses the state-of-the-art EfficientNet architecture as its feature extractor. A local tourism dataset was collected for this research and annotated with two kinds of captions: captions that describe the image literally, and captions that represent a human's logical response when seeing the image. The two kinds of captions make the captioning model more human-like when implemented in an assistance system. We compared models built on two EfficientNet variants (B0 and B4) against models built on the well-known VGG16 and InceptionV3 architectures. The best BLEU scores obtained are 73.39 on the training set and 24.51 on the validation set, both achieved with EfficientNetB0. The captioning results show that the developed model can produce logical captions for local-tourism-related images.
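As an illustration of the kind of pipeline the abstract describes, the following minimal TensorFlow/Keras sketch pairs a pretrained EfficientNetB0 feature extractor with an additive (Bahdanau-style) visual attention layer. The input resolution, layer sizes, and attention formulation shown here are assumptions for illustration only and are not taken from the paper itself.

```python
import tensorflow as tf

# Assumed setup: EfficientNetB0 as a frozen feature extractor whose spatial
# feature map is attended over by a captioning decoder (names illustrative).
extractor = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
extractor.trainable = False

class BahdanauAttention(tf.keras.layers.Layer):
    """Additive (Bahdanau) visual attention over spatial image features."""
    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)
        self.W2 = tf.keras.layers.Dense(units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, features, hidden):
        # features: (batch, regions, feat_dim); hidden: (batch, units)
        hidden_t = tf.expand_dims(hidden, 1)
        score = self.V(tf.nn.tanh(self.W1(features) + self.W2(hidden_t)))
        weights = tf.nn.softmax(score, axis=1)           # (batch, regions, 1)
        context = tf.reduce_sum(weights * features, 1)   # (batch, feat_dim)
        return context, weights

# Flatten the 7x7 spatial grid into 49 "regions" the decoder can attend to.
image = tf.random.uniform((1, 224, 224, 3))
feats = extractor(image)                                 # (1, 7, 7, 1280)
feats = tf.reshape(feats, (1, -1, feats.shape[-1]))      # (1, 49, 1280)
context, weights = BahdanauAttention(units=512)(feats, tf.zeros((1, 512)))
```

In a full captioning model, the context vector would be fed to a recurrent decoder at each time step to generate the next caption word; that decoder and the training loop are omitted here for brevity.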