Storing memory for molecular recognition is an efficient strategy for responding to external stimuli. Biological processes use different strategies to store memory. In the olfactory cortex, synaptic connections formed upon stimulation by an odor establish a distributed memory that can be retrieved upon re-exposure. In contrast, the immune system encodes a specialized memory through diverse receptors that recognize a multitude of evolving pathogens. Despite the mechanistic differences between olfactory and immune memory, these systems can be viewed as implementing different information-encoding strategies. Here, we present a theoretical framework with artificial neural networks to characterize optimal memory strategies for both static and dynamic (evolving) patterns. Our approach is a generalization of the energy-based Hopfield model, in which memory is stored as the minima of a network's energy landscape. We find that while classical Hopfield networks with distributed memory can efficiently encode static patterns, they are inadequate for evolving patterns. To follow an evolving pattern, we show that a distributed network must use a higher learning rate, which, in turn, can distort the energy landscape associated with the stored memory attractors. Specifically, narrow connecting paths emerge between memory attractors, leading to misclassification of evolving patterns. We demonstrate that compartmentalized networks with specialized subnetworks are the optimal solution for storing memory of evolving patterns. We postulate that the evolution of pathogens may be the reason the immune system encodes a focused memory, in contrast to the distributed memory used in the olfactory cortex, which interacts with mixtures of static odors.
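For reference, the energy-based picture invoked above can be sketched in its classical form; this is a minimal illustration assuming the standard Hebbian Hopfield model with binary units $\sigma_i \in \{-1,+1\}$ and an illustrative online learning rate $\lambda$, not the specific generalization developed in this work:
\[
E(\boldsymbol{\sigma}) \;=\; -\frac{1}{2}\sum_{i\neq j} J_{ij}\,\sigma_i\sigma_j,
\qquad
J_{ij} \;\leftarrow\; (1-\lambda)\,J_{ij} \;+\; \frac{\lambda}{N}\,\xi_i^{(t)}\xi_j^{(t)},
\]
where the patterns $\xi^{(t)}$ presented to the network of $N$ units become minima (attractors) of $E$. In this sketch, a larger $\lambda$ lets the minima track a pattern that changes over time, at the cost of reshaping the energy landscape around the stored attractors.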