As a crucial component of many natural language processing tasks, entity and relation extraction transforms unstructured text into structured data, providing essential support for constructing knowledge graphs (KGs). However, existing entity relation extraction models tend to focus on extracting richer semantic features or optimizing the relation extraction method, overlooking the importance of positional information and subject characteristics for this task. To address this problem, we propose SPECE, a subject position-based complex exponential embedding model for entity relation extraction. Its encoder module combines a randomly initialized dilated convolutional network with a BERT encoder, and it determines the starting position of the predicted subject from semantic cues. Positional encoding features are then integrated with textual features through a complex exponential embedding method. Experimental results on the NYT and WebNLG datasets show that SPECE achieves notable F1-score improvements over baseline models on both datasets, which further validates its effectiveness.
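To make the fusion step concrete, the sketch below illustrates one plausible reading of a complex exponential embedding that anchors phases at the predicted subject position: each pair of encoder features is treated as a complex number and rotated by a phase proportional to the token's offset from the subject start, in the spirit of rotary-style positional encodings. The function name, the frequency schedule, and the exact fusion rule are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch (assumed, not the authors' code): fuse positional and textual
# features by multiplying feature pairs with complex exponentials whose phase
# depends on the offset from the predicted subject's start token.
import numpy as np

def complex_exponential_embedding(token_features: np.ndarray, subject_start: int) -> np.ndarray:
    """Rotate token features by position-dependent complex phases.

    token_features: (seq_len, d_model) real-valued encoder outputs, d_model even.
    subject_start:  index of the predicted subject's first token (hypothetical input).
    Returns an array of the same shape with positional phases applied.
    """
    seq_len, d_model = token_features.shape
    half = d_model // 2
    # One frequency per feature pair, as in sinusoidal/rotary encodings (assumed schedule).
    freqs = 1.0 / (10000.0 ** (np.arange(half) / half))          # (half,)
    # Relative offsets from the subject start, so the subject anchors the phase.
    offsets = np.arange(seq_len) - subject_start                  # (seq_len,)
    phases = np.outer(offsets, freqs)                             # (seq_len, half)
    # View each feature pair as a complex number and multiply by e^{i * phase}.
    complex_feats = token_features[:, :half] + 1j * token_features[:, half:]
    rotated = complex_feats * np.exp(1j * phases)
    return np.concatenate([rotated.real, rotated.imag], axis=-1)

if __name__ == "__main__":
    feats = np.random.randn(12, 64).astype(np.float64)  # toy sentence of 12 tokens
    fused = complex_exponential_embedding(feats, subject_start=3)
    print(fused.shape)  # (12, 64)
```

One design consequence of this formulation is that tokens are positioned relative to the predicted subject rather than to the sentence start, so the same relation pattern produces similar phase differences wherever the subject occurs in the sentence.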