The unstructured nature of 3D point clouds challenges the application of deep learning methods to their understanding. The recent emergence of the Transformer architecture offers an ideal solution to this problem, owing to its permutation invariance and its capability of modeling long-range context relationships. Building on these properties, in this paper we propose an end-to-end Representation Aggregation and Propagation Transformer (RAPFormer) architecture for 3D point cloud analysis. Specifically, two core components, termed the Point-wise Double Transformer and Channel-wise Double Transformer modules, are designed to explicitly capture interdependencies among points and channels, respectively. Both modules consist of two key operations: aggregation attention, which maps the point-wise/channel-wise feature maps into a global space, and propagation attention, which diffuses the aggregated features back to the input points or channels. Extensive quantitative and qualitative experiments on 3D shape classification and segmentation benchmark datasets demonstrate the effectiveness and competitiveness of our approach.
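The aggregation/propagation attention pattern described above can be sketched minimally as follows. This is an illustrative NumPy sketch under our own assumptions, not the paper's implementation: the anchor-token formulation, the function names (`aggregate`, `propagate`), and all shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate(points, anchors):
    # Aggregation attention (sketch): M anchor tokens attend over the N
    # point-wise features, pooling them into M global tokens.
    # points: (N, C), anchors: (M, C)
    attn = softmax(anchors @ points.T / np.sqrt(points.shape[1]))  # (M, N)
    return attn @ points                                           # (M, C)

def propagate(points, global_tokens):
    # Propagation attention (sketch): each of the N points attends over the
    # M global tokens, diffusing the aggregated context back to every point.
    attn = softmax(points @ global_tokens.T / np.sqrt(points.shape[1]))  # (N, M)
    return points + attn @ global_tokens  # residual connection, (N, C)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1024, 64))  # N=1024 points, C=64 channels (hypothetical)
anch = rng.normal(size=(16, 64))   # M=16 hypothetical global anchor tokens

g = aggregate(pts, anch)    # (16, 64): aggregated global features
out = propagate(pts, g)     # (1024, 64): refined per-point features
```

Because the attention weights depend only on feature similarity and the pooling is a weighted sum, permuting the input points permutes the output rows correspondingly, consistent with the permutation invariance the abstract relies on.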