“…Mainly, these approaches employ multiple neural network components, such as RNNs, GCNs, CNNs, GANs, and encoder-decoder pairs, to carry out the tasks of language generation and recommendation [6,10,11,12,13,14,21]. For example, in the work of Li et al. [10], an RNN module is responsible for predicting the sentiment of the user's utterance towards a given preference (entity), and the outputs of the RNN module are subsequently fed as input to an autoencoder-based module that computes the recommendation.…”
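To make the two-stage pipeline concrete, the sketch below shows one plausible way an RNN sentiment predictor could feed an autoencoder-style recommender. All module names, dimensions, and wiring are illustrative assumptions for exposition; they are not the exact architecture of Li et al. [10].

```python
# Minimal PyTorch sketch of the pipeline described above (assumed design, not
# the authors' implementation): an RNN predicts the user's sentiment toward a
# mentioned entity, and the resulting entity-sentiment vector is completed by
# an autoencoder whose reconstruction scores rank candidate items.
import torch
import torch.nn as nn

class SentimentRNN(nn.Module):
    """Encodes a tokenized utterance and predicts sentiment (dislike/neutral/like)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        _, h = self.rnn(self.embed(token_ids))   # h: (1, batch, hidden_dim)
        return self.classifier(h.squeeze(0))     # sentiment logits per utterance

class AutoencoderRecommender(nn.Module):
    """Reconstructs a sparse entity-sentiment vector; scores rank all entities."""
    def __init__(self, num_entities, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_entities, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, num_entities)

    def forward(self, entity_sentiments):
        return self.decoder(self.encoder(entity_sentiments))

# Usage sketch: the sentiment prediction for one mentioned entity updates a
# sparse entity-sentiment vector, which the autoencoder completes.
vocab_size, num_entities = 5000, 1000
rnn = SentimentRNN(vocab_size)
rec = AutoencoderRecommender(num_entities)

utterance = torch.randint(0, vocab_size, (1, 12))      # one tokenized utterance
sentiment = rnn(utterance).softmax(dim=-1)             # P(dislike), P(neutral), P(like)
entity_vec = torch.zeros(1, num_entities)
entity_vec[0, 42] = sentiment[0, 2] - sentiment[0, 0]  # signed sentiment for entity 42
scores = rec(entity_vec)                               # ranking scores over all entities
top_items = scores.topk(5).indices                     # recommended entity ids
```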