Standard grammar formalisms are defined without reflecting the incremental, serial and context-dependent nature of language processing; any incrementality must therefore be captured by independently defined parsing and/or generation techniques, and context-dependence by separate pragmatic modules. This makes for a poor setup for modelling dialogue, with its rich speaker-hearer interaction and high proportion of context-dependent and apparently grammatically ill-formed utterances. Instead, this paper takes an inherently incremental grammar formalism, Dynamic Syntax (DS), proposes a context-based extension, and defines corresponding context-dependent parsing and generation models, together with a resulting natural definition of context-dependent well-formedness. These are shown to allow a straightforward model of otherwise problematic dialogue phenomena such as shared utterances, ellipsis and alignment. We conclude that language competence is a capacity for dialogue.
Ever since dialogue modelling first developed relative to broadly Gricean assumptions about utterance interpretation (Clark, 1996), it has remained an open question whether the full complexity of higher-order intention computation is made use of in everyday conversation. In this paper we examine the phenomenon of split utterances, from the perspective of Dynamic Syntax, to further probe the necessity of full intention recognition/formation in communication. We do so by exploring the extent to which the interactive coordination of dialogue exchange can be seen as emergent from low-level mechanisms of language processing, without needing representation by interlocutors of each other's mental states, or fully developed intentions as regards messages to be conveyed. We thus illustrate how many dialogue phenomena can be seen as direct consequences of the grammar architecture, as long as this is presented within an incremental, goal-directed/predictive model.
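The low-level mechanism appealed to here can be caricatured in a few lines of code. The sketch below is a deliberately simplified illustration, not the DS machinery itself (the class, agent labels and example utterance are all hypothetical): if parsing and generation operate over one and the same incrementally growing state, a split utterance requires no extra apparatus, since the second speaker simply extends the state the first speaker has built.

```python
# Toy illustration (not the DS formalism): a single incremental parse
# state shared by both interlocutors. A "split utterance" falls out for
# free, because the continuing speaker extends the very state the first
# speaker built, with no modelling of the other's mental state.

class SharedParseState:
    """Word-by-word record of an utterance under joint construction."""
    def __init__(self):
        self.contributions = []  # (agent, word) pairs, in order

    def extend(self, agent, word):
        self.contributions.append((agent, word))

    def utterance(self):
        return " ".join(word for _, word in self.contributions)

state = SharedParseState()
for w in ["did", "you", "give", "me"]:
    state.extend("A", w)      # A starts the utterance...
for w in ["the", "yellow", "pills?"]:
    state.extend("B", w)      # ...B seamlessly completes it
print(state.utterance())      # did you give me the yellow pills?
```

The point of the toy is only that role-switch mid-utterance is a non-event at this level: nothing in the state records, or needs to record, whose turn it "really" is.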
Language use is full of subsentential shifts of context, a phenomenon dramatically illustrated in conversation, where non-sentential utterances displaying seamless shifts between speaker and hearer roles appear regularly. The hurdle this poses for standard assumptions is that every local linguistic dependency can be distributed across speakers, with the content of what they are saying and the significance of each conversational move emerging incrementally. Accordingly, we argue that modelling a psychologically realistic grammar necessitates recasting the notion of natural language in terms of our ability to interact with others and the environment, abandoning the competence-performance dichotomy as standardly envisaged. We sketch such a grammar.