“…In line with such a view, we argue that co-speech gestures, which are spatial in nature, convey information that can be easily incorporated into the text/discourse mental model, because mental models themselves are spatially organized (Knauff & Johnson-Laird, 2002); moreover, gestures are cast in the same non-discrete representational format as mental models. Previous studies have shown that co-speech gestures performed by the speaker facilitate the listener's construction of an articulated mental model (Cutica & Bucciarelli, 2008, 2011; this also holds for oral deaf individuals trained to lip-read: Vendrame, Cutica & Bucciarelli, 2010). A better mental model results in a greater number of correct recollections and correct inferences drawn from the information explicitly contained in the discourse, along with poorer retention of surface (verbatim) information.…”