Skeleton-based action recognition has recently attracted considerable research interest owing to its computational efficiency. Recent work built upon Graph Convolutional Networks (GCNs) has shown promising performance on this task by modelling the intrinsic spatial correlations between skeleton joints. However, these methods consider only local properties of action sequences in the spatial-temporal domain and are consequently limited in distinguishing complex actions with similar local movements. To address this problem, a novel tri-attention module (TAM) is proposed to guide GCNs to perceive significant variations across local movements. Specifically, the devised TAM is implemented in three steps: i) a dimension-permuting unit characterises skeleton action sequences in three different domains: body poses, joint trajectories, and evolving projections; ii) a global statistical modelling unit aggregates the first-order and second-order statistics of the global context to perceive the significant movement variations within each domain; iii) a fusion unit integrates the features of the three domains and serves as guidance for the graph convolution at each layer. Through these three steps, the frames, joints, and channels that exhibit significant variations are enhanced. We conduct extensive experiments on two large-scale benchmark datasets, NTU RGB+D and Kinetics-Skeleton. Experimental results demonstrate that the proposed TAM can be easily plugged into existing GCNs and achieves performance comparable to state-of-the-art methods.
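The abstract does not specify the module at implementation level, but the three-step idea — permute the feature tensor into frame, joint, and channel views; aggregate first-order (mean) and second-order (variance) global statistics per view; fuse the resulting masks back onto the features — can be sketched as follows. This is a minimal NumPy illustration under assumed conventions: a feature map of shape (C, T, V) for channels, frames, and joints, a simple sigmoid gate combining the two statistics, and illustrative function names (`domain_attention`, `tri_attention`) not taken from the paper.

```python
import numpy as np

def domain_attention(mean, var, w=(1.0, 1.0), b=0.0):
    # Combine first-order (mean) and second-order (variance) global
    # statistics of one domain, then squash to (0, 1) with a sigmoid.
    # The weights w and bias b stand in for learned parameters.
    s = w[0] * mean + w[1] * var + b
    return 1.0 / (1.0 + np.exp(-s))

def tri_attention(x):
    # x: feature map of shape (C, T, V) — channels, frames, joints.
    # Each domain's statistics are obtained by aggregating over the
    # other two axes (the "dimension permuting" step).
    frame_att = domain_attention(x.mean(axis=(0, 2)), x.var(axis=(0, 2)))  # (T,)
    joint_att = domain_attention(x.mean(axis=(0, 1)), x.var(axis=(0, 1)))  # (V,)
    chan_att  = domain_attention(x.mean(axis=(1, 2)), x.var(axis=(1, 2)))  # (C,)
    # Fusion step: broadcast the three masks back onto the feature map,
    # so high-variation frames, joints, and channels are emphasised.
    return (x
            * chan_att[:, None, None]
            * frame_att[None, :, None]
            * joint_att[None, None, :])
```

In a real GCN the per-domain statistics would feed small learned projections rather than fixed scalar weights, and the gated features would be passed to the next graph-convolution layer; the sketch only shows the statistical gating structure.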