D-LTAG is a discourse-level extension of lexicalized tree-adjoining grammar (LTAG), in which discourse syntax is projected by different types of discourse connectives and discourse interpretation is a product of compositional rules, anaphora resolution, and inference. In this paper, we present a D-LTAG extension of ongoing work on an LTAG syntax-semantic interface. First, we show how predicate-argument semantics are computed for standard, 'structural' discourse connectives. These are connectives that retrieve their semantic arguments from their D-LTAG syntactic tree. Then we focus on discourse connectives that occur syntactically as (usually) fronted adverbials. These connectives do not retrieve both of their semantic arguments from a single D-LTAG syntactic tree. Rather, their predicate-argument structure and interpretation distinguish them from structural connectives, as well as from other adverbials that do not function as discourse connectives. The unique contribution of this paper lies in showing how compositional rules and anaphora resolution interact within the D-LTAG syntax-semantic interface to yield their semantic interpretations, with multi-component syntactic trees sometimes required.
We present an evaluation of a spoken dialogue system that detects and adapts to user disengagement and uncertainty in real time. We compare this version of our system to a version that adapts only to user disengagement, and to a version that ignores user disengagement and uncertainty entirely. We find a significant increase in task success when comparing both affect-adaptive versions of our system to our non-adaptive baseline, but only for male users.
Full-text discourse parsing relies on texts comprehensively annotated with discourse relations. To this end, we address a significant gap in the inter-sentential discourse relations annotated in the Penn Discourse Treebank (PDTB), namely the class of cross-paragraph implicit relations, which account for 30% of inter-sentential relations in the corpus. We present an annotation study exploring the incidence rate of adjacent vs. non-adjacent implicit relations in cross-paragraph contexts, and the relative difficulty of annotating them. Our experiments show a high incidence of non-adjacent relations that are difficult to annotate reliably, suggesting the practicality of backing off from their annotation to reduce noise for corpus-based studies. Our resulting guidelines follow the PDTB adjacency constraint for implicits while employing an underspecified representation of non-adjacent implicits, and yield 62% inter-annotator agreement on this task.