“…To mitigate forgetting and facilitate forward knowledge transfer, replay-based methods (Lopez-Paz & Ranzato, 2017; Shin et al., 2017; Choi et al., 2021) store a subset of old samples in a memory buffer, while expansion-based methods (Rusu et al., 2016; Yoon et al., 2017; 2019) grow the model structure to accommodate incoming knowledge. However, these methods require either extra memory buffers (Parisi et al., 2019) or a network architecture that keeps growing as new tasks arrive (Kong et al., 2022), both of which are computationally expensive (De Lange et al., 2021). Thus, improving performance within a fixed network capacity remains challenging.…”
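
As a concrete illustration of the memory cost replay-based methods incur, below is a minimal sketch of an experience-replay buffer using reservoir sampling, one common strategy in this family. All names here are hypothetical; this is not the specific mechanism of any cited method, only an example of how a fixed-size memory of old samples might be maintained and drawn from during training.

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past examples, filled via reservoir sampling
    so every example seen so far has equal probability of being retained."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []        # stored (input, label) pairs
        self.num_seen = 0     # total examples observed across all tasks

    def add(self, example):
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Keep the new example with probability capacity / num_seen,
            # overwriting a uniformly random slot.
            j = random.randrange(self.num_seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, batch_size):
        # Draw old examples to interleave with the current task's batch.
        return random.sample(self.data, min(batch_size, len(self.data)))


# Usage: mix replayed old samples into each new-task training step.
buffer = ReplayBuffer(capacity=200)
stream = ((f"x{i}", i % 10) for i in range(1000))  # toy task stream
for example in stream:
    buffer.add(example)
    replayed = buffer.sample(batch_size=8)  # joined with the current batch
```

The buffer's memory footprint is fixed at `capacity` stored samples, which makes the trade-off explicit: smaller buffers save memory but retain fewer old examples to rehearse against forgetting.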