Objective. Although emotion recognition has been studied for decades, classification methods that are both more accurate and less computationally demanding are still needed. At present, many studies extract EEG features from all channels to recognize emotional states; however, an efficient feature domain that improves classification performance while reducing the number of EEG channels is still lacking. Approach. In this study, a continuous wavelet transform (CWT)-based feature representation of multi-channel EEG data is proposed for automatic emotion recognition. The proposed feature preserves time-frequency information through the CWT coefficients. For each EEG channel, every CWT coefficient is mapped to a strength-to-entropy component ratio to obtain a 2D representation. Finally, a 2D feature matrix, termed CEF2D, is created by concatenating these representations across channels and fed to a deep convolutional neural network. Based on the CWT-domain energy-to-entropy ratio, effective channel and CWT scale selection schemes are also proposed to reduce computational complexity. Main results. Compared with previous studies, the results show improved valence and arousal classification accuracy in both the 2-class and 3-class cases. For the 2-class problem, the average accuracies for the valence and arousal dimensions are 98.83% and 98.95%, respectively; for the 3-class problem, they are 98.25% and 98.68%, respectively. Significance. Our findings show that the entropy-based feature of EEG data in the CWT domain is effective for emotion recognition. Utilizing the proposed feature domain, an effective channel selection method can reduce computational complexity.
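The pipeline described above (CWT coefficients per channel, mapped to an energy-to-entropy ratio, then stacked into a 2D matrix) can be sketched as follows. This is a minimal illustration, not the paper's exact method: the Morlet wavelet, the chosen scales, and the function names (`morlet`, `cwt_energy_entropy_ratio`) are assumptions, and the paper's strength-to-entropy component ratio is approximated here by the ratio of total coefficient energy to the Shannon entropy of the normalized power distribution.

```python
import numpy as np

def morlet(scale, w0=5.0):
    # Complex Morlet wavelet sampled at unit intervals; support ~10*scale samples.
    n = int(10 * scale)
    t = np.arange(-(n // 2), n // 2 + 1) / scale
    return np.exp(1j * w0 * t) * np.exp(-t**2 / 2) / np.sqrt(scale)

def cwt_energy_entropy_ratio(signal, scales):
    """For each CWT scale, compute an energy-to-entropy ratio of the
    coefficient magnitudes (hypothetical stand-in for the paper's
    strength-to-entropy component ratio)."""
    feats = []
    for s in scales:
        coeffs = np.convolve(signal, morlet(s), mode="same")  # CWT at scale s
        p = np.abs(coeffs) ** 2
        energy = p.sum()
        q = p / energy                         # normalized power distribution
        entropy = -np.sum(q * np.log(q + 1e-12))
        feats.append(energy / entropy)
    return np.array(feats)

# One feature column per channel -> 2D matrix (scales x channels), CEF2D-style.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 512))            # 4 channels, 512 samples (toy data)
cef2d = np.stack(
    [cwt_energy_entropy_ratio(ch, scales=[4, 8, 16, 32]) for ch in eeg], axis=1
)
print(cef2d.shape)  # (4, 4): scales x channels
```

A per-channel aggregate of the same ratio could then rank channels for the selection step the abstract mentions; channels with consistently low ratios would be dropped before the CNN stage.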