This paper presents a new theory for training neural networks, called "Neural Network Meaningful Learning" (NNMeL). According to this theory, meaningful learning in an artificial neural network occurs when concepts are learned separately and then related to one another. The theory is grounded in Ausubel's cognitive theory of learning, which holds that the most important factor in learning is prior knowledge: meaningful learning occurs when a person consciously relates new knowledge to what they already know. In addition, a new model named "Deep Clustering based on Sentence Similarity" (DCSS) is proposed for topic detection. Instead of producing sentence representations with an autoencoder trained by denoising, DCSS derives them from the similarity between sentences. A trainable framework based on NNMeL is also presented. Several experiments were conducted for evaluation. First, an experiment checks the validity of the NNMeL theory: the results confirm that training an ANN according to this theory (training concepts one after another) yields better results than training all concepts together. Next, the DCSS model, which is based on sentence similarity, is compared with an autoencoder-based method, both by evaluation metrics and by human evaluators; the results show a 6.6% improvement in accuracy and higher satisfaction among the human evaluators. Finally, the proposed model is compared with ten other topic-detection methods, and the results show that it outperforms all of them.
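The core NNMeL idea summarized above (training concepts one after another rather than all together) can be sketched in miniature. The example below is only an illustration under our own assumptions: it uses a tiny hand-written perceptron and two toy "concepts", not the paper's actual architecture, data, or training procedure; all names (`train_perceptron`, `concept_a`, `concept_b`) are hypothetical.

```python
def train_perceptron(samples, w=None, epochs=20, lr=0.1):
    """Train a single perceptron; samples is a list of (features, label), label in {0, 1}."""
    if w is None:
        w = [0.0] * (len(samples[0][0]) + 1)  # feature weights + bias (last slot)
    for _ in range(epochs):
        for x, y in samples:
            act = w[-1] + sum(wi * xi for wi, xi in zip(w, x))
            pred = 1 if act > 0 else 0
            err = y - pred
            # zip(w, x) updates only the feature weights; the bias is updated separately.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)] + [w[-1] + lr * err]
    return w

def predict(w, x):
    return 1 if w[-1] + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Two toy "concepts": concept A depends on the first feature, concept B on the second.
concept_a = [([1, 0], 1), ([0, 0], 0)]
concept_b = [([0, 1], 1), ([0, 0], 0)]

# Sequential, NNMeL-style curriculum: learn concept A first, then continue training
# on concept B starting from the weights learned for A, so the new concept is
# related to what was already learned.
w = train_perceptron(concept_a)
w = train_perceptron(concept_b, w=w)

for x, y in concept_a + concept_b:
    assert predict(w, x) == y
print("sequential training classifies all concept examples correctly")
```

The point of the sketch is only the training schedule: the second call starts from the weights produced by the first, so earlier learning conditions later learning, which is the relationship the theory emphasizes.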