Recently, many studies on model compression have been carried out to address the high computational cost and large memory footprint incurred by deploying deep neural networks. In this paper, model compression of convolutional neural networks is formulated as a multiobjective optimization problem with two conflicting objectives: reducing model size and improving performance. A novel structured pruning method called Conventional-based and Evolutionary Approaches Guided Multiobjective Pruning (CEA-MOP) is proposed to address this problem, in which the power of conventional pruning methods is effectively exploited to guide the evolutionary process. A delicate balance between pruning rate and model accuracy is achieved automatically by a multiobjective evolutionary optimization model. First, an ensemble framework integrates pruning metrics to establish a codebook for subsequent evolutionary operations. Then, an efficient coding method is developed to shorten the chromosome length, ensuring superior scalability. Finally, sensitivity analysis is carried out automatically to determine the upper bound of the pruning rate for each layer. Notably, on CIFAR-10, CEA-MOP reduces FLOPs by more than 50% on ResNet-110 while improving accuracy. Moreover, on ImageNet, CEA-MOP reduces FLOPs by more than 50% on ResNet-101 with a negligible drop in top-1 accuracy.
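For concreteness, the sketch below illustrates how such a two-objective evaluation might be set up: a chromosome encodes per-layer pruning rates, each gene is capped by a sensitivity-derived upper bound, and every candidate is scored on remaining FLOPs and an accuracy proxy, from which a Pareto front is extracted. This is a minimal sketch under stated assumptions, not the paper's implementation; all names and values (LAYER_FLOPS, UPPER_BOUNDS, the quadratic accuracy-loss proxy) are hypothetical placeholders.

```python
# Minimal sketch of a two-objective pruning evaluation (illustrative only).
import random
from typing import List, Tuple

# Hypothetical per-layer FLOP counts for a small convolutional network.
LAYER_FLOPS = [3.2e8, 1.6e8, 1.6e8, 8.0e7]

# Layer-wise upper bounds on the pruning rate, standing in for the output
# of the sensitivity-analysis step (values here are made up).
UPPER_BOUNDS = [0.6, 0.7, 0.7, 0.5]


def evaluate(chromosome: List[float]) -> Tuple[float, float]:
    """Return the two conflicting objectives for one candidate:
    remaining FLOPs (minimize) and accuracy loss (minimize)."""
    remaining = sum(f * (1.0 - r) for f, r in zip(LAYER_FLOPS, chromosome))
    # Stand-in for evaluating the pruned network on a validation set;
    # a real pipeline would prune, fine-tune, and measure accuracy.
    acc_loss = sum(r ** 2 for r in chromosome) / len(chromosome)
    return remaining, acc_loss


def random_chromosome() -> List[float]:
    # Each gene is a per-layer pruning rate capped by its sensitivity bound.
    return [random.uniform(0.0, ub) for ub in UPPER_BOUNDS]


def pareto_front(pop: List[List[float]]) -> List[List[float]]:
    """Keep candidates not strictly dominated on both objectives."""
    front = []
    for c in pop:
        fc = evaluate(c)
        dominated = any(
            all(o <= p for o, p in zip(evaluate(d), fc)) and evaluate(d) != fc
            for d in pop
        )
        if not dominated:
            front.append(c)
    return front


if __name__ == "__main__":
    population = [random_chromosome() for _ in range(50)]
    for c in pareto_front(population)[:5]:
        flops, loss = evaluate(c)
        print(f"rates={[round(r, 2) for r in c]}  FLOPs={flops:.2e}  loss={loss:.3f}")
```

In a full evolutionary method, the Pareto-front selection above would sit inside a loop of crossover and mutation over the encoded chromosomes, with the codebook constraining which filters are candidates for removal.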