Many insects use memories of their visual environment to adaptively drive spatial behaviours. In ants, visual memories are fundamental for navigation, whereby foragers follow long visually guided routes to foraging sites and return to the location of their nest. Whilst we understand the basic visual pathway to the memory centres (Optic Lobes to Mushroom Bodies) involved in the storage of visual information, it remains largely unknown what type of visual-scene representation underpins view-based navigation in ants. Several experimental studies have shown ants using "higher-order" visual information, that is, features extracted across the whole extent of a visual scene, which raises the question of where these features are computed. One such study showed that ants can use the proportion of a shape experienced to the left of their visual centre to learn and recapitulate a route, a feature referred to as the "fractional position of mass" (FPM). In this work, we use a simple model constrained by the known neuroanatomy and information-processing properties of the Mushroom Bodies to explore whether use of the FPM could emerge from the bilateral organisation of the insect brain, whilst assuming a "retinotopic" view representation. We demonstrate that such bilaterally organised memory models can implicitly encode the FPM learned during training. We find that balancing the "quality" of the memory match across the left and right hemispheres allows a trained model to retrieve the FPM-defined direction, even when tested with shapes other than those used in training, as demonstrated by ants. This result is largely independent of model parameter values, suggesting that some aspects of higher-order processing of a visual scene may emerge from the structure of the neural circuits rather than being computed in discrete processing modules.
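To make the proposed mechanism concrete, the following is a minimal sketch (not the authors' implementation) of a bilaterally organised, retinotopic memory model: each hemisphere stores its half of a training view as a pixel-wise memory, and recall selects the heading at which the left and right match qualities are most balanced. The view resolution, shape widths, familiarity measure, and all function names (fpm, hemispheric_familiarity, recalled_heading) are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a bilateral, retinotopic memory model (illustrative only).
# Views are 1-D binary arrays over azimuth; the visual centre sits between
# the left half (indices 0..N/2-1) and the right half (indices N/2..N-1).
import numpy as np

N = 360  # assumed resolution: one pixel per degree of azimuth


def fpm(view):
    """Fractional position of mass: proportion of 'on' pixels left of centre."""
    total = view.sum()
    return view[: N // 2].sum() / total if total else 0.5


def make_view(shape_centre_deg, shape_width_deg=60):
    """Binary panoramic view containing a single dark shape."""
    view = np.zeros(N)
    idx = np.arange(shape_centre_deg - shape_width_deg // 2,
                    shape_centre_deg + shape_width_deg // 2) % N
    view[idx] = 1.0
    return view


def hemispheric_familiarity(memory_half, current_half):
    """Match quality of one hemisphere's stored half-view (higher = better)."""
    return -np.abs(memory_half - current_half).sum()


def recalled_heading(left_mem, right_mem, world_view):
    """Rotate the current panorama and pick the heading at which the left and
    right match qualities are most balanced (smallest absolute difference)."""
    best, best_imbalance = 0, np.inf
    for heading in range(N):
        rotated = np.roll(world_view, -heading)
        f_left = hemispheric_familiarity(left_mem, rotated[: N // 2])
        f_right = hemispheric_familiarity(right_mem, rotated[N // 2:])
        imbalance = abs(f_left - f_right)
        if imbalance < best_imbalance:
            best, best_imbalance = heading, imbalance
    return best


# Training: each hemisphere stores its half of a view whose FPM defines the route direction.
train_view = make_view(shape_centre_deg=160)              # shape mostly left of centre
left_mem, right_mem = train_view[: N // 2], train_view[N // 2:]
print("trained FPM:", fpm(train_view))

# Test with a different (wider) shape: balancing the hemispheric match
# qualities still selects a heading that approximately reproduces the FPM.
test_world = make_view(shape_centre_deg=200, shape_width_deg=90)
h = recalled_heading(left_mem, right_mem, test_world)
print("recalled heading:", h, "FPM at that heading:", fpm(np.roll(test_world, -h)))
```

In this toy version the memory is purely retinotopic and the FPM is never computed during learning; it is only implicit in how the shape's mass is split between the two stored half-views, which is the intuition behind the bilateral account summarised above.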