A novel method for the classification and retrieval of 3D models is proposed; it exploits the 2D panoramic view representation of 3D models as input to an ensemble of Convolutional Neural Networks which automatically compute the features. The first step of the proposed pipeline, pose normalization, is performed using the SYMPAN method, which is also computed on the panoramic view representation. In the training phase, three panoramic views, corresponding to the major axes, are used to train an ensemble of Convolutional Neural Networks. The panoramic views consist of 3-channel images containing the Spatial Distribution Map, the Normals' Deviation Map and the magnitude of the Normals' Deviation Map Gradient Image. The proposed method aims at capturing the feature continuity of 3D models, while simultaneously minimizing data preprocessing via the construction of an augmented image representation. It is extensively tested, in terms of classification and retrieval accuracy, on two standard large-scale datasets: ModelNet and ShapeNet.

1. Introduction

In the recent past, convolutional neural networks (CNNs) have shown their superiority over humans in computing features, although they are very sensitive to the input representation. In this work, an extension of the PANORAMA 3D shape representation, previously proposed by our team (Papadakis et al., 2010), is exploited as the input representation to a CNN for computing descriptor features for 3D object classification and retrieval. The 3D models are initially pose normalized using the SYMPAN pose normalization algorithm (Sfikas et al., 2014), which is based on the use of reflective symmetry on their panoramic view images. Next, an augmented panoramic view is created and used to train the convolutional neural network. This augmented panoramic view consists of the spatial and orientation components of PANORAMA (see 3.1.1), along with the magnitude of the gradient image which is extracted from the orientation component. A reduction in the size of the augmented panoramic view representation is shown to benefit the training procedure.
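As a rough sketch of how such an augmented 3-channel image could be assembled, the following assumes the spatial and orientation components (the Spatial Distribution Map and Normals' Deviation Map) are already available as 2D arrays; the function and variable names are illustrative, not part of the proposed method:

```python
import numpy as np

def augmented_panorama(sdm, ndm):
    """Assemble a 3-channel augmented panoramic image (illustrative sketch).

    sdm -- 2D array standing in for the Spatial Distribution Map.
    ndm -- 2D array standing in for the Normals' Deviation Map.
    Returns an (H, W, 3) array: [SDM, NDM, |gradient of NDM|].
    """
    # Third channel: magnitude of the gradient of the orientation component.
    gy, gx = np.gradient(ndm.astype(np.float64))
    grad_mag = np.hypot(gx, gy)

    # Normalize each channel to [0, 1] so the channels are comparable.
    def normalize(channel):
        span = channel.max() - channel.min()
        return (channel - channel.min()) / span if span > 0 else np.zeros_like(channel)

    return np.stack([normalize(sdm), normalize(ndm), normalize(grad_mag)], axis=-1)

# Example with random maps standing in for real panoramic views.
img = augmented_panorama(np.random.rand(64, 128), np.random.rand(64, 128))
```

Stacking the gradient magnitude as a separate channel lets the network see edge-like structure in the orientation map directly, rather than having to learn it from the raw map alone.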