Neural network models of memory are notorious for catastrophic interference: old items are forgotten as new items are memorized (French, 1999; McCloskey & Cohen, 1989). While working memory (WM) in human adults shows severe capacity limitations, these capacity limitations do not reflect neural-network-style catastrophic interference. However, our ability to quickly apprehend the numerosity of small sets of objects (i.e., subitizing) does show catastrophic capacity limitations, and subitizing and WM might reflect a common capacity. Accordingly, computational investigations (Knops, Piazza, Sengupta, Eger & Melcher, 2014; Sengupta, Surampudi & Melcher, 2014) suggest that mutual inhibition among neurons can explain both kinds of capacity limitations, as well as why our ability to estimate the numerosity of larger sets is limited according to a Weber ratio signature. Based on simulations with a saliency-map-like network and mathematical proofs, we provide three results. First, mutual inhibition among neurons leads to catastrophic interference when items are presented simultaneously: the network can remember a limited number of items, but when more items are presented, the network forgets all of them. Second, if memory items are presented sequentially rather than simultaneously, the network remembers the most recent items rather than forgetting all of them. Hence, the tendency in WM tasks to sequentially attend even to simultaneously presented items might not only reflect attentional limitations, but also an adaptive strategy to avoid catastrophic interference. Third, the mean activation level in the network can be used to estimate the number of items in small sets, but it does not accurately reflect the number of items in larger sets. Rather, we suggest that the Weber ratio signature of large number discrimination emerges naturally from the interaction between the limited precision of a numeric estimation system and a multiplicative gain control mechanism.
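The first result can be illustrated with a minimal rate-model sketch. This is not the authors' network; it is a toy recurrent model under assumed parameters (unit count, self-excitation weight `w_self`, inhibition weight `w_inh`, and simulation horizons are all hypothetical), in which each unit excites itself and inhibits every other unit. With these parameters the network sustains up to four simultaneously presented items through a delay period, but with five items the net drive on every unit falls below its decay, so all items are lost at once rather than only the excess:

```python
import numpy as np

def simulate(n_items, n_units=10, w_self=1.5, w_inh=0.15,
             dt=0.1, stim_steps=200, delay_steps=2000):
    """Toy mutual-inhibition network (hypothetical parameters).

    Each unit receives self-excitation w_self * r_i and global
    inhibition w_inh * sum of all other units' rates. Items are
    presented simultaneously as external input, then removed, and
    the network runs freely through a delay period.
    """
    a = np.zeros(n_units)           # membrane activations
    stim = np.zeros(n_units)
    stim[:n_items] = 1.0            # simultaneous presentation
    for step in range(stim_steps + delay_steps):
        drive = stim if step < stim_steps else 0.0
        r = np.clip(a, 0.0, 1.0)            # firing rates, saturating at 1
        inhib = w_inh * (r.sum() - r)       # inhibition from all other units
        a = a + dt * (-a + w_self * r - inhib + drive)
    return np.clip(a, 0.0, 1.0)            # rates after the delay

# Within capacity: all 3 presented items remain active after the delay
print(np.sum(simulate(3) > 0.5))   # -> 3
# Beyond capacity: all 5 items decay to zero (catastrophic interference)
print(np.sum(simulate(5) > 0.5))   # -> 0
```

The collapse is easy to see analytically in this sketch: with k saturated units, each unit's recurrent drive is w_self - w_inh * (k - 1), which exceeds the decay only while k <= 4; at k = 5 every unit's activity shrinks by the same factor each step, so the whole memory fades together.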