Existing deep learning compilers cannot perform efficient hardware-performance-aware graph fusion when both time and power consumption are considered: excessive fusion or skipped fusion makes the cost of operator optimization too high. In addition, these compilers optimize the computational graph of Deep Neural Networks (DNN) through static graph transformations based on a greedy algorithm, considering only runtime performance and ignoring the cost of the tuning process. To solve these problems, this paper proposes PCGC, a DNN computational graph optimization compiler. Guided by performance feedback at runtime, PCGC designs a computational graph fusion and splitting optimization strategy based on multilevel operator fusion-splitting rules. First, PCGC uses a rule-guided graph segmentation algorithm to recursively segment the computational graph into smaller subgraphs, enabling an efficient and fine-grained search. Then, PCGC uses a cost model to receive hardware performance feedback and, together with operator fusion rules, guides the partial fusion and splitting of the nodes and edges of the computational graph, flexibly generating the optimal subgraph for different hardware. Finally, the fusion process takes operator computing attributes into account: graph-level node fusion and operator-level loop fusion are closely combined to optimize the search space of partial fusion. Compared with other advanced accelerators, PCGC optimizes the overall power consumption on an embedded GPU by an average of 130.5% while the time consumption on each hardware is not lower than the average time consumption. On a Domain Specific Architecture (DSA), PCGC optimizes power consumption by an average of 66.5%. On an FPGA, PCGC optimizes power consumption by 66.1%. In a sense, PCGC can achieve high-speed inference in specific power-supply scenarios, reducing the carbon emissions of edge computing.
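
To make the two core steps of the abstract concrete, the following is a minimal, hypothetical Python sketch of rule-guided recursive graph segmentation and cost-model-guided partial fusion. All names (`Subgraph`, `CostModel`, `FUSION_RULES`, the size-threshold segmentation rule, the time-power product objective) are illustrative assumptions, not PCGC's actual API or rule set.

```python
# Hypothetical sketch of rule-guided segmentation + cost-model-guided partial
# fusion. Names and rules are illustrative, not PCGC's real implementation.
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                                  # operator type, e.g. "conv2d", "relu"
    inputs: list = field(default_factory=list)

@dataclass
class Subgraph:
    nodes: list                              # topologically ordered Nodes

# Example multilevel fusion rules: operator pairs that may be fused (assumed).
FUSION_RULES = {("conv2d", "relu"), ("conv2d", "add"), ("matmul", "relu")}

def segment(graph: Subgraph, max_nodes: int = 8) -> list:
    """Recursively split the graph into subgraphs small enough for a
    fine-grained search (the rule here is a simple size threshold)."""
    if len(graph.nodes) <= max_nodes:
        return [graph]
    mid = len(graph.nodes) // 2
    left, right = Subgraph(graph.nodes[:mid]), Subgraph(graph.nodes[mid:])
    return segment(left, max_nodes) + segment(right, max_nodes)

class CostModel:
    """Stands in for hardware feedback: returns (time, power) estimates.
    A real system would compile and profile the subgraph on the target."""
    def measure(self, subgraph: Subgraph) -> tuple:
        time = float(len(subgraph.nodes))    # toy proxy for latency
        power = 0.5 * time                   # toy proxy for power draw
        return time, power

def fuse_partially(subgraph: Subgraph, cost: CostModel) -> Subgraph:
    """Greedily fuse adjacent rule-allowed pairs, keeping a fusion only if
    the measured time*power product improves (partial fusion rather than
    all-or-nothing fusion)."""
    best = subgraph
    best_t, best_p = cost.measure(best)
    i = 0
    while i < len(best.nodes) - 1:
        a, b = best.nodes[i], best.nodes[i + 1]
        if (a.op, b.op) in FUSION_RULES:
            fused = Node(op=f"{a.op}+{b.op}", inputs=a.inputs)
            candidate = Subgraph(best.nodes[:i] + [fused] + best.nodes[i + 2:])
            t, p = cost.measure(candidate)
            if t * p < best_t * best_p:      # joint time/power objective
                best, best_t, best_p = candidate, t, p
                continue                     # re-examine position i after fusing
        i += 1
    return best

# Usage: segment a graph, then apply feedback-guided partial fusion per subgraph.
g = Subgraph([Node("conv2d"), Node("relu"), Node("matmul"), Node("relu")])
optimized = [fuse_partially(sg, CostModel()) for sg in segment(g)]
```

The key design point mirrored here is that each fusion is accepted or rejected per edge based on measured hardware feedback, rather than applying a fixed all-or-nothing fusion pass.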