In this paper, we present a method for human action recognition from depth images and posture data using convolutional neural networks (CNNs). Two input descriptors are used for action representation. The first is a depth motion image (DMI), which accumulates consecutive depth images of a human action; the second is a proposed moving joints descriptor (MJD), which represents the motion of body joints over time. To maximize feature extraction for accurate action classification, three CNN channels are trained with different inputs: the first channel is trained with depth motion images, the second with both depth motion images and moving joints descriptors, and the third with moving joints descriptors only. The action predictions from the three CNN channels are fused for the final action classification. We propose several score-fusion operations to maximize the score of the correct action. Experiments show that fusing the outputs of all three channels yields better results than using a single channel or fusing only two channels. Our proposed method was evaluated on three public datasets: MSRAction3D, UTD-MHAD, and MAD. The results indicate that the proposed approach outperforms most existing state-of-the-art methods, such as HON4D and Actionlet, on MSRAction3D. Although the MAD dataset contains a large number of actions (35) compared to existing RGB-D action datasets, our method achieves 91.86% accuracy.
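To make the score-fusion step concrete, the following minimal sketch illustrates how per-channel class scores might be combined. It is not the authors' code: the `fuse_scores` helper, the choice of average/max/product rules, and the example score vectors are illustrative assumptions, shown only to clarify the idea of fusing three channels' predictions before taking the argmax.

```python
import numpy as np

def fuse_scores(score_lists, method="average"):
    """Fuse per-channel class-score vectors (one per CNN channel).

    score_lists: list of 1-D arrays, each a softmax score vector over actions.
    method: 'average', 'max', or 'product' -- generic fusion rules used here
    for illustration; not necessarily the exact operations proposed in the paper.
    """
    scores = np.stack(score_lists)          # shape: (num_channels, num_actions)
    if method == "average":
        fused = scores.mean(axis=0)         # element-wise mean across channels
    elif method == "max":
        fused = scores.max(axis=0)          # element-wise maximum across channels
    elif method == "product":
        fused = scores.prod(axis=0)         # element-wise product across channels
    else:
        raise ValueError(f"unknown fusion method: {method}")
    return fused

# Hypothetical softmax outputs from the three channels for a 4-action problem.
ch1 = np.array([0.10, 0.60, 0.20, 0.10])   # channel 1: DMI input
ch2 = np.array([0.05, 0.70, 0.15, 0.10])   # channel 2: DMI + MJD input
ch3 = np.array([0.20, 0.50, 0.20, 0.10])   # channel 3: MJD input

fused = fuse_scores([ch1, ch2, ch3], method="average")
predicted_action = int(np.argmax(fused))   # index of the predicted action class
```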