To understand human emotional states, both the local activity within individual regions of the cerebral cortex and the interactions among different brain regions must be considered. This paper proposes a hierarchical emotional context feature learning model that improves multichannel electroencephalography (EEG)-based emotion recognition by learning spatiotemporal EEG features from the local brain-region level up to the global brain level. The proposed method comprises a regional brain-level encoding module, a global brain-level encoding module, and a classifier. First, multichannel EEG signals grouped into nine regions according to the functional roles of brain areas are fed into the regional brain-level encoding module to learn local spatiotemporal information. Subsequently, the global brain-level encoding module improves emotion classification performance by integrating the local spatiotemporal information from the individual brain regions to learn global context features of the brain regions related to emotion. Specifically, we apply a two-layer bidirectional gated recurrent unit (BGRU) with self-attention in the regional brain-level module and a one-layer BGRU with self-attention in the global brain-level module. Experiments were conducted on three datasets to evaluate the EEG-based emotion recognition performance of the proposed method. The results demonstrate that the proposed method outperforms state-of-the-art methods by better capturing the characteristics of multichannel EEG signals.
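The abstract does not include an implementation, but the described hierarchy maps naturally onto a small PyTorch sketch. The following is a minimal illustration, not the authors' code: the region grouping, channel counts, hidden size, attention form (simple additive attention pooling), and all class and parameter names are assumptions made for clarity.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Additive self-attention pooling over the sequence axis (assumed form)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                       # x: (batch, seq, dim)
        weights = torch.softmax(self.score(x), dim=1)
        return (weights * x).sum(dim=1)         # (batch, dim)

class RegionalEncoder(nn.Module):
    """Regional brain-level module: two-layer BGRU + self-attention per region."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.bgru = nn.GRU(in_dim, hidden_dim, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.attn = SelfAttention(2 * hidden_dim)

    def forward(self, x):                       # x: (batch, time, channels_in_region)
        out, _ = self.bgru(x)
        return self.attn(out)                   # (batch, 2*hidden_dim)

class HierarchicalEEGModel(nn.Module):
    """Regional encoders -> global one-layer BGRU + self-attention -> classifier."""
    def __init__(self, region_dims, hidden_dim=64, num_classes=3):
        super().__init__()
        self.regional = nn.ModuleList(
            RegionalEncoder(d, hidden_dim) for d in region_dims)
        self.global_bgru = nn.GRU(2 * hidden_dim, hidden_dim, num_layers=1,
                                  batch_first=True, bidirectional=True)
        self.global_attn = SelfAttention(2 * hidden_dim)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, regions):                 # list of 9 tensors (batch, time, ch_r)
        # Encode each brain region locally, then treat the nine regional
        # features as a length-9 sequence for the global encoder.
        feats = [enc(x) for enc, x in zip(self.regional, regions)]
        seq = torch.stack(feats, dim=1)         # (batch, 9, 2*hidden_dim)
        out, _ = self.global_bgru(seq)
        return self.classifier(self.global_attn(out))
```

A hypothetical usage, assuming nine regions with illustrative channel counts and 128 time steps per segment:

```python
region_dims = [4, 3, 5, 3, 4, 3, 5, 3, 2]       # assumed channels per region
regions = [torch.randn(8, 128, c) for c in region_dims]
model = HierarchicalEEGModel(region_dims)
logits = model(regions)                         # (8, num_classes)
```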