With the rapid development of wearable cameras, massive collections of egocentric videos for first-person visual perception have become available. Recognizing first-person activities from egocentric videos faces many challenges, including a limited field of view (FoV), occlusions, and unstable motion. Observing that sensor data from wearable devices facilitates human activity recognition (HAR), activity recognition using multi-modal data is attracting increasing attention. However, the scarcity of related datasets hinders the development of multi-modal deep learning for egocentric activity recognition. Meanwhile, deploying deep learning in real-world applications has drawn growing attention to continual learning, which often suffers from catastrophic forgetting. Yet catastrophic forgetting in continual learning for egocentric activity recognition, especially in the context of multiple modalities, remains unexplored due to the unavailability of a suitable dataset. To facilitate this research, we present UESTC-MMEA-CL, a multi-modal egocentric activity dataset for continual learning, collected with self-developed glasses that integrate a first-person camera and wearable sensors. It contains synchronized video, accelerometer, and gyroscope data for 32 types of daily activities performed by 10 participants wearing the glasses. We describe the collection device and procedure, and compare the dataset's activity classes and scale with those of other publicly available multi-modal datasets for egocentric activity recognition. A statistical analysis of the sensor data is given to show its auxiliary effect on recognizing different behaviors. We also report egocentric activity recognition results using the three modalities, RGB, acceleration, and gyroscope, both separately and jointly, on a base multi-modal network architecture. To explore catastrophic forgetting in continual learning tasks on UESTC-MMEA-CL, four baseline methods are extensively evaluated with different multi-modal combinations. We hope the UESTC-MMEA-CL dataset can promote future studies on continual learning for first-person activity recognition in wearable applications. Our dataset will be released soon.
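To make the abstract's experimental setup concrete, the following is a minimal sketch of one plausible multi-modal baseline for the three modalities described above: per-modality encoders whose features are fused by concatenation before a joint classifier. The class name, feature dimensions, encoder designs, and late-fusion choice are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Hypothetical late-fusion baseline over RGB, accelerometer,
    and gyroscope streams (an illustrative sketch, not the paper's model)."""

    def __init__(self, num_classes=32, feat_dim=128):
        super().__init__()
        # RGB branch: projects pooled video features (e.g., from a
        # pretrained backbone; 2048-d is an assumed dimension).
        self.rgb_encoder = nn.Sequential(
            nn.Linear(2048, feat_dim), nn.ReLU()
        )
        # Accelerometer and gyroscope branches: 1-D convolutions over
        # 3-axis time series, then global average pooling.
        self.acc_encoder = nn.Sequential(
            nn.Conv1d(3, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU()
        )
        self.gyro_encoder = nn.Sequential(
            nn.Conv1d(3, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU()
        )
        # Joint classifier over the concatenated modality features.
        self.classifier = nn.Linear(3 * feat_dim, num_classes)

    def forward(self, rgb_feat, acc, gyro):
        # rgb_feat: (B, 2048) pooled video features; acc, gyro: (B, 3, T).
        fused = torch.cat([
            self.rgb_encoder(rgb_feat),
            self.acc_encoder(acc),
            self.gyro_encoder(gyro),
        ], dim=1)
        return self.classifier(fused)

# Toy usage with random tensors standing in for real dataset samples.
model = LateFusionClassifier()
logits = model(torch.randn(4, 2048), torch.randn(4, 3, 200), torch.randn(4, 3, 200))
print(logits.shape)  # torch.Size([4, 32])
```

Dropping one branch (or zeroing its input) yields the single- and dual-modality variants the abstract refers to when comparing modalities separately and jointly.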