…Here we would like to discuss the advantages and the necessity of continual learning based on specialized models in the era of emerging large models.

Category | Sub-categories | Advantages | Disadvantages | References
… | … | … | … | [26], [34], [35], [50], [51]
Generative-replay | Generative-data; Generative-feature | no need to store real data; customized data replay | heavy reliance on generative quality; high space complexity | [28], [29], [32], [36], [52]
Self-supervised | Contrastive-learning; Pseudo-labeling; Foundation-model driven | strong adaptability; exemplar-memory free | high training cost; hard to converge | [27], [41], [48], [53], [54]
Regularization-based | … | … | … | [39], [40], [43], [44], [55]
Dynamic-architecture | Parameter Allocation; Architecture Decomposition; Modular Network | high model flexibility; high adaptability to diverse data | network parameter count grows gradually; high space complexity | [30], [46], [56], [57], [58]

Although recent large models [59], [60] achieve fair zero-shot learning ability, they often lack the human-like ability to classify targets with semantic understanding. Another significant concern is cost.…
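To make the regularization-based family above concrete, here is a minimal sketch of its core idea: when training on a new task, a quadratic penalty anchors the weights near their old-task values so that naive fine-tuning does not overwrite them. This is a toy scalar model with an illustrative anchor and penalty strength, not an implementation from any of the cited works.

```python
# Minimal sketch of a regularization-based continual-learning penalty
# (a quadratic anchor in the style of EWC) on a toy model y = w * x.
# All values below are illustrative assumptions, not from the cited papers.

def grad_mse(w, xs, ys):
    # Gradient of mean squared error for the scalar model y = w * x.
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def train(w, xs, ys, w_anchor=None, lam=0.0, lr=0.05, steps=200):
    # Plain gradient descent; if w_anchor is given, the loss also
    # includes lam * (w - w_anchor)^2, pulling w back toward the
    # weight learned on the previous task.
    for _ in range(steps):
        g = grad_mse(w, xs, ys)
        if w_anchor is not None:
            g += 2 * lam * (w - w_anchor)
        w -= lr * g
    return w

# Task A data is generated with w = 2; task B data with w = -1.
task_a = ([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
task_b = ([1.0, 2.0, 3.0], [-1.0, -2.0, -3.0])

w = train(0.0, *task_a)                          # learn task A: w -> 2
w_plain = train(w, *task_b)                      # naive fine-tuning: w -> -1 (task A forgotten)
w_reg = train(w, *task_b, w_anchor=w, lam=5.0)   # penalty keeps w between the two tasks

print(round(w, 2), round(w_plain, 2), round(w_reg, 2))
```

The penalized run settles at a compromise between the old and new optima, which illustrates both the advantage (no stored exemplars) and the trade-off (new-task accuracy is sacrificed) of this method family.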