Keyframe‐based motion synthesis plays a significant role in games and movies. Existing methods for complex motion synthesis often require secondary post‐processing to eliminate foot sliding before yielding satisfactory motions. In this paper, we analyze the cause of the sliding issue and attribute it to the mismatch between the root trajectory and the motion postures. To address this problem, we propose a novel end‐to‐end Spatial‐Temporal transformer network conditioned on foot contact information for high‐quality keyframe‐based motion synthesis. Specifically, our model mainly comprises a spatial‐temporal transformer encoder and two decoders that learn motion sequence features and predict motion postures and foot contact states. A novel constrained embedding, which consists of keyframe and foot contact constraints, is incorporated into the model to facilitate network learning from diversified control knowledge. To generate a root trajectory that matches the motion postures, we design a differentiable root trajectory reconstruction algorithm that constructs the root trajectory from the decoder outputs. Qualitative and quantitative experiments on the public LaFAN1, Dance, and Martial Arts datasets demonstrate the superiority of our method in generating high‐quality complex motions compared with state‐of‐the‐art methods.