During sensemaking, users often create external representations to help them make sense of what they know and what they need to know. In doing so, they necessarily adopt or construct some form of representational language using the tools at hand. By describing the languages implicit in such representations, we believe we are better able to describe and differentiate both what users do and the interfaces that might support them. Drawing on approaches to the analysis of language, in particular Mann and Thompson's Rhetorical Structure Theory, we analyse the representations that users create in order to expose their underlying 'visual grammar'. We do this in the context of a user study involving evidential reasoning, in which participants were asked to address an adapted version of IEEE VAST 2011 mini challenge 3 (interpreting a potential terrorist plot implicit in a set of news reports). We show how our approach enables the unpacking of the heterogeneous and embedded nature of user-generated representations, and how visual grammars evolve and become more complex over time in response to evolving sensemaking needs.