With its expanding range of applications, electroencephalogram (EEG)-based emotion recognition has attracted growing research interest. However, acquiring sufficient, high-quality EEG recordings of genuine emotion for training recognition models has long been a bottleneck in this field. Because a subject's emotional state is easily influenced by many factors and the intended emotion is not always successfully evoked, it is difficult to determine whether genuine emotional EEG actually appears during an experiment. In contrast to EEG, spontaneous facial expression, produced without intentional control, is readily recognized by computers and is one of the primary, reliable cues for understanding emotion. Motivated by this observation, we propose an approach to building a ground-truth EEG dataset with visual indication. First, the relationship between facial expression and EEG is analyzed in detail from the viewpoints of biophysics and statistical correlation. Second, based on the analysis result that an evoked facial expression should be accompanied by the corresponding emotional EEG, a method for building the ground-truth EEG dataset with visual indication is proposed, together with its automatic computer implementation. Third, using this method we built a ground-truth EEG dataset covering three emotions (joy, sadness, and neutral), collected from undergraduate and graduate students at Minzu University of China. Finally, the validity of the established dataset is assessed through comparative experiments between long short-term memory (LSTM) emotion recognition models trained on two different datasets: one containing both genuine and false EEG samples, and one containing only the genuine data.
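The visual-indication idea described above can be sketched as follows: an EEG window is retained as ground truth only when a synchronized facial-expression recognizer agrees with the target emotion. This is a minimal illustrative sketch, not the authors' implementation; the recognizer is a stub, and all function names and data shapes are assumptions.

```python
# Sketch of visual-indication labeling: keep an EEG window as ground truth
# only when the concurrent facial expression matches the target emotion.
# detect_expression() is a stub standing in for a real expression recognizer.

EMOTIONS = ("joy", "sadness", "neutral")

def detect_expression(frame):
    """Stand-in for a real facial-expression recognizer (hypothetical)."""
    return frame["expression"]

def build_ground_truth(eeg_windows, video_frames, target_emotion):
    """Pair each EEG window with its synchronized video frame and keep
    only the windows whose detected expression matches the target."""
    dataset = []
    for eeg, frame in zip(eeg_windows, video_frames):
        if detect_expression(frame) == target_emotion:
            dataset.append((eeg, target_emotion))
    return dataset

# Toy example: three EEG windows with synchronized expression labels.
eeg_windows = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
video_frames = [{"expression": "joy"},
                {"expression": "neutral"},
                {"expression": "joy"}]
ground_truth = build_ground_truth(eeg_windows, video_frames, "joy")
# Only the first and third windows are kept as genuine "joy" samples.
```

In practice the stub would be replaced by an actual expression classifier running on the synchronized video stream, and the retained windows would form the "genuine" portion of the training set.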