Text semantic similarity computation is a fundamental problem in natural language processing. In recent years, deep-learning-based text semantic similarity algorithms have become the mainstream research paradigm, but they suffer from an insufficient understanding of text semantics and, consequently, poor interpretability of their results. In this paper, we propose a text semantic encoding model that combines frame semantic theory with deep learning. The model enriches the semantic representation within sentences by making full use of the frames, frame elements, lexical units, and inter-frame relations in a frame semantic knowledge base, combining them with the BERT model, and it captures semantic interactions between sentences through Siamese transformer encoders. Experiments on MSRParaphraseCorpus and quora-question-pairs show that the proposed model outperforms comparable models in terms of F1 score.