In recent years, deep learning has been widely applied to communications and has achieved remarkable performance gains. Most existing works are based on data-driven deep learning, which requires a large amount of training data for the communication model to adapt to a new environment, and hence substantial computing resources for collecting data and retraining the model. In this paper, we significantly reduce the amount of training data required for new environments by leveraging the learning experience from known environments. To this end, we introduce few-shot learning to enable the communication model to generalize to new environments, realized by an attention-based method. With an attention network embedded into the deep-learning-based communication model, environments with different power delay profiles can be learned together during training, which constitutes the learning experience. By exploiting this learning experience, the communication model requires only a few pilot blocks to perform well in a new environment. Through an example of deep-learning-based channel estimation, we demonstrate that this design method achieves better performance than the existing data-driven approach designed for few-shot learning.
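To make the idea concrete, the following is a minimal PyTorch sketch, under our own assumptions rather than the paper's exact architecture, of an attention network embedded into a DNN channel estimator. All class names and dimensions are illustrative. The attention module queries a bank of learned environment embeddings, one per training power delay profile, so that experience accumulated over known environments can be blended when estimating the channel in a new one.

import torch
import torch.nn as nn

class AttentionChannelEstimator(nn.Module):
    # Hypothetical sketch: attention over learned environment embeddings.
    def __init__(self, n_pilots: int, n_taps: int, n_envs: int, d_model: int = 64):
        super().__init__()
        # Encode the received pilot block (real and imaginary parts stacked).
        self.encoder = nn.Sequential(
            nn.Linear(2 * n_pilots, d_model),
            nn.ReLU(),
            nn.Linear(d_model, d_model),
        )
        # Learnable keys/values summarizing each training environment
        # (one entry per power delay profile seen during training).
        self.env_keys = nn.Parameter(torch.randn(n_envs, d_model))
        self.env_values = nn.Parameter(torch.randn(n_envs, d_model))
        # Map the query and attended context to channel tap estimates.
        self.head = nn.Linear(2 * d_model, 2 * n_taps)

    def forward(self, pilot_blocks: torch.Tensor) -> torch.Tensor:
        # pilot_blocks: (batch, 2 * n_pilots)
        q = self.encoder(pilot_blocks)                         # (B, d_model)
        scores = q @ self.env_keys.t() / (q.shape[-1] ** 0.5)  # (B, n_envs)
        ctx = torch.softmax(scores, dim=-1) @ self.env_values  # (B, d_model)
        return self.head(torch.cat([q, ctx], dim=-1))          # (B, 2 * n_taps)

# Few-shot use: after multi-environment training, only a few pilot blocks
# from the new environment are needed, since the attention weights reuse
# the learning experience stored in the environment embeddings.
model = AttentionChannelEstimator(n_pilots=16, n_taps=8, n_envs=4)
estimate = model(torch.randn(5, 32))  # 5 pilot blocks -> (5, 16) tap estimates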
One of the fundamental limitations of deep neural networks (DNNs) is their inability to acquire and accumulate new cognitive capabilities. When new data appear, such as object classes outside the prescribed set being recognized, a conventional DNN cannot recognize them due to the fundamental formulation it takes. The current solution is typically to redesign and retrain the entire network, perhaps with a new configuration, on a newly expanded dataset to accommodate the new knowledge. This process is quite different from how a human learns. In this paper, we propose a new learning method, named Accretionary Learning (AL), that emulates human learning in that the set of objects to be recognized need not be pre-specified. The corresponding learning structure is modularized and can dynamically expand to register and use new knowledge. During accretionary learning, the system does not need to be totally redesigned and retrained as the set of objects grows in size. The proposed DNN structure does not forget previous knowledge when learning to recognize new data classes. We show that the new structure and the design methodology lead to a system that can grow to cope with increased cognitive complexity while providing stable and superior overall performance.
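The following is a minimal sketch, under our own assumptions and not the paper's exact design, of the modularized structure the abstract describes: a shared feature extractor plus one module per registered class set. Registering a new class set freezes the existing parameters, so earlier knowledge is preserved and no full redesign or retraining is required. Class and method names here are hypothetical.

import torch
import torch.nn as nn

class AccretionaryClassifier(nn.Module):
    # Hypothetical sketch: a recognizer that grows by accretion.
    def __init__(self, in_dim: int, feat_dim: int = 128):
        super().__init__()
        self.feat_dim = feat_dim
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.heads = nn.ModuleList()  # one module per registered class set

    def register_classes(self, n_new_classes: int) -> None:
        # After the first class set is learned, freeze shared and existing
        # parameters so earlier knowledge is preserved; only the new head
        # (and, on the first call, the backbone) is trained.
        if self.heads:
            for p in self.backbone.parameters():
                p.requires_grad = False
            for head in self.heads:
                for p in head.parameters():
                    p.requires_grad = False
        self.heads.append(nn.Linear(self.feat_dim, n_new_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.backbone(x)
        # Concatenate scores from all heads so old and new classes compete
        # on a common scale at recognition time.
        return torch.cat([head(f) for head in self.heads], dim=-1)

model = AccretionaryClassifier(in_dim=784)
model.register_classes(10)  # initial set of 10 classes
model.register_classes(5)   # accrete 5 new classes; old modules stay frozen
scores = model(torch.randn(2, 784))  # (2, 15) class scores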