3D object classification is an important task in computer vision. To exploit the high-order and multi-modal correlations among 3D data, we propose an adaptive multi-hypergraph convolutional network (AMHCN) framework to enhance 3D object classification performance. The proposed network improves current hypergraph neural networks in two respects. First, existing networks rely on hyperedge-constrained neighborhoods for feature aggregation, which may introduce noise or overlook informative vertices outside the hyperedges. To address this, we extend partially absorbing random walks (PARW) to hypergraphs in order to capture optimal vertex neighborhoods from the hypergraph globally. Building on PARW over hypergraphs, we then design a new hypergraph convolution operator that learns deep embeddings from the optimized high-order correlations, enabling effective information propagation among the most relevant vertices. Second, regarding the multi-modal representations that arise in practice, current multi-modal hypergraph learning models either treat all modalities equally or introduce numerous parameters to learn the weights of different modalities. To overcome these shortcomings, we propose a simple but effective dynamic weighting strategy for combining multi-modal representations, in which the importance of each modality is adjusted adaptively by the loss function. We apply the proposed model to 3D object classification, and experimental results on two 3D benchmark datasets show that our method outperforms state-of-the-art methods, confirming the effectiveness of both our convolution operator and our multi-modality fusion strategy.
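
To make the first contribution concrete, below is a minimal sketch of PARW-based hypergraph convolution. The abstract does not give the exact operator, so this assumes the standard PARW closed form A = (Λ + Δ)⁻¹Λ with Λ = αI and the usual normalized hypergraph Laplacian Δ = I − D_v^{−1/2} H W D_e^{−1} Hᵀ D_v^{−1/2}; the top-k neighborhood truncation, the function names, and the ReLU activation are illustrative assumptions, not the paper's definitive implementation.

```python
import numpy as np

def hypergraph_laplacian(H, edge_weights=None):
    """Normalized hypergraph Laplacian:
       Delta = I - D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2},
       where H (n x m) is the vertex-hyperedge incidence matrix."""
    n, m = H.shape
    W = np.ones(m) if edge_weights is None else edge_weights
    d_v = H @ W                       # vertex degrees
    d_e = H.sum(axis=0)               # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
    theta = Dv_inv_sqrt @ H @ np.diag(W / d_e) @ H.T @ Dv_inv_sqrt
    return np.eye(n) - theta

def parw_affinity(H, alpha=1.0):
    """Partially absorbing random walk on the hypergraph:
       A = alpha * (alpha * I + Delta)^{-1}; row i holds the probabilities
       that a walk started at vertex i is absorbed at each vertex."""
    L = hypergraph_laplacian(H)
    n = H.shape[0]
    return alpha * np.linalg.inv(alpha * np.eye(n) + L)

def parw_hypergraph_conv(X, H, Theta, alpha=1.0, k=10):
    """One convolution step: aggregate vertex features X (n x d) over the
       k strongest PARW neighbors of each vertex, then apply the learnable
       linear map Theta (d x d')."""
    A = parw_affinity(H, alpha)
    weakest = np.argsort(A, axis=1)[:, :-k]        # all but the top-k per row
    A_sparse = A.copy()
    np.put_along_axis(A_sparse, weakest, 0.0, axis=1)
    A_sparse /= A_sparse.sum(axis=1, keepdims=True)  # re-normalize rows
    return np.maximum(A_sparse @ X @ Theta, 0.0)     # ReLU activation
```

Because A is computed from the global absorption probabilities rather than from raw hyperedge membership, each vertex aggregates from its most relevant vertices anywhere in the hypergraph, which is the intuition behind the proposed operator.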
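The second contribution, the dynamic multi-modal weighting, can likewise be sketched. The abstract only states that modality importance is adapted by the loss with few parameters, so the following is one plausible parameter-light realization, assuming a single learnable scalar per modality normalized by a softmax; the class `AdaptiveModalityFusion` and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveModalityFusion(nn.Module):
    """Combine per-modality embeddings with one learnable scalar per
       modality; the softmax keeps weights positive and summing to 1,
       and backpropagating the task loss adapts them dynamically."""
    def __init__(self, num_modalities, embed_dim, num_classes):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_modalities))
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, modality_embeddings):
        # modality_embeddings: list of (batch, embed_dim) tensors
        w = F.softmax(self.logits, dim=0)           # adaptive modality weights
        z = sum(w[m] * e for m, e in enumerate(modality_embeddings))
        return self.classifier(z)

# Usage with two hypothetical modalities (e.g. multi-view and point-cloud
# embeddings) on a 40-class benchmark:
fusion = AdaptiveModalityFusion(num_modalities=2, embed_dim=64, num_classes=40)
z_views, z_points = torch.randn(8, 64), torch.randn(8, 64)
logits = fusion([z_views, z_points])
loss = F.cross_entropy(logits, torch.randint(0, 40, (8,)))
loss.backward()  # gradients reach self.logits, re-weighting the modalities
```

Compared with learning a full projection per modality, this adds only one parameter per modality, which matches the abstract's claim of avoiding abundant extra parameters while still letting the loss steer the fusion.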