With the widespread adoption of virtual reality and 360-degree video, there is a pressing need for objective metrics that reliably assess quality in this immersive panoramic format. However, existing image quality assessment models developed for traditional fixed-viewpoint content do not fully account for the perceptual issues specific to 360-degree viewing. This paper proposes a full-reference quality assessment (FR-IQA) method for 360-degree images based on a multi-channel architecture. The proposed method refines its estimate of distorted image quality using two easily obtained auxiliary features: visual saliency and depth. A convolutional neural network (CNN) is designed and trained for quality prediction. Furthermore, the proposed method predicts user viewing behavior within 360-degree images, which further benefits the multi-channel CNN architecture and enables weighted average pooling of the predicted FR-IQA scores. Performance is evaluated on publicly available databases; in both standard and cross-database evaluation experiments, the proposed multi-channel model outperforms state-of-the-art methods. Moreover, an ablation study confirms the method's good generalization ability and robustness.
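As a minimal illustration of the weighted average pooling step described above, the sketch below shows how per-channel quality predictions might be combined into a single score using weights derived from predicted viewing behavior. The function and variable names (`pool_quality_scores`, `channel_scores`, `saliency_weights`) are hypothetical and not taken from the paper; this is a sketch of the general technique, not the authors' exact formulation.

```python
import numpy as np

def pool_quality_scores(channel_scores, saliency_weights):
    """Weighted average pooling of per-channel FR-IQA predictions.

    channel_scores   : 1-D array of quality scores, one per CNN channel/viewport.
    saliency_weights : 1-D array of nonnegative weights derived from predicted
                       viewing behavior (e.g., saliency mass per viewport).
                       These names are illustrative, not from the paper.
    """
    channel_scores = np.asarray(channel_scores, dtype=float)
    saliency_weights = np.asarray(saliency_weights, dtype=float)
    total = saliency_weights.sum()
    if total <= 0:
        # Guard against a degenerate all-zero weight map: fall back to
        # uniform (unweighted) average pooling.
        return float(channel_scores.mean())
    # Normalize the weights to sum to one, then take the weighted average.
    return float(np.dot(channel_scores, saliency_weights / total))

# Example: six per-channel predictions, weighted by predicted attention.
scores = [72.1, 68.4, 70.0, 65.3, 74.2, 69.8]
weights = [0.30, 0.10, 0.15, 0.05, 0.25, 0.15]
print(pool_quality_scores(scores, weights))
```

The design intuition is that regions a viewer is likely to fixate on should contribute more to the overall score than rarely viewed regions, which is why the weights are assumed to come from the viewing-behavior prediction rather than being uniform.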