The effectiveness, robustness, and flexibility of memory and learning constitute the very essence of human natural intelligence, cognition, and consciousness. However, currently accepted views on these subjects have, to date, been put forth without grounding in a true physical theory of how the brain communicates internally via its electrical signals. This lack of a solid theoretical framework has implications not only for our understanding of how the brain works, but also for the wide range of computational models developed from the standard orthodox view of brain neuronal organization and brain-network function based on ad hoc Hodgkin–Huxley circuit analogies. These analogies have produced a multitude of Artificial, Recurrent, Convolutional, Spiking, etc., Neural Networks (ARCSe NNs), which in turn underlie the standard algorithms that form the basis of artificial intelligence (AI) and machine learning (ML) methods. Our hypothesis, based on our recently developed physical model of weakly evanescent brain wave propagation (WETCOW), is that, contrary to the current orthodox model in which brain neurons merely integrate and fire, accompanied by slow leakage, neurons can instead perform far more sophisticated tasks of efficient coherent synchronization and desynchronization guided by the collective influence of propagating nonlinear, near-critical brain waves, waves that are currently assumed to be nothing but inconsequential subthreshold noise. In this paper we highlight the learning and memory capabilities of our WETCOW framework and then apply it to the specific domain of AI/ML and neural networks. We demonstrate that learning inspired by these critically synchronized brain waves is shallow, yet its timing and accuracy outperform those of its deep ARCSe counterparts on standard test datasets. These results have implications both for our understanding of brain function and for the wide range of AI/ML applications.
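For readers unfamiliar with the orthodox picture being contrasted here, the "integrate and fire, accompanied by slow leakage" dynamics can be sketched in a few lines as a standard leaky integrate-and-fire (LIF) neuron. This is a generic textbook model, not the authors' WETCOW framework, and all parameter values below are illustrative assumptions:

```python
import numpy as np

# Sketch of the orthodox leaky integrate-and-fire (LIF) neuron:
# the membrane potential v integrates an input current, leaks back
# toward a resting value, and emits a spike whenever it crosses a
# fixed threshold. Parameters are illustrative, not from the paper.
def lif_spikes(current, dt=1e-3, tau=0.02, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Return spike times (sample indices) for an input current trace."""
    v, spikes = v_rest, []
    for i, I in enumerate(current):
        # leak toward v_rest plus integration of the input current
        v += dt * (-(v - v_rest) + I) / tau
        if v >= v_thresh:      # threshold crossing: fire ...
            spikes.append(i)
            v = v_reset        # ... and reset the potential
    return spikes

# Constant suprathreshold drive yields regular periodic firing; any
# subthreshold fluctuation is simply discarded by this model, which is
# exactly the "inconsequential noise" assumption the paper challenges.
spikes = lif_spikes(np.full(1000, 1.5))
```

In this model all information is carried by the threshold-crossing spike train; the subthreshold trajectory of `v` plays no computational role, which is the point of contrast with wave-mediated synchronization.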