Gastrointestinal disturbances are commonly reported in children with autism and may be associated with compositional changes in intestinal bacteria. In a previous report, we surveyed intestinal microbiota in ileal and cecal biopsy samples from children with autism and gastrointestinal dysfunction (AUT-GI) and children with only gastrointestinal dysfunction (Control-GI). Our results demonstrated the presence of members of the family Alcaligenaceae in some AUT-GI children, while no Control-GI children had Alcaligenaceae sequences. Here we demonstrate that increased levels of Alcaligenaceae in intestinal biopsy samples from AUT-GI children result from the presence of high levels of members of the genus Sutterella. We also report the first Sutterella-specific PCR assays for detecting, quantitating, and genotyping Sutterella species in biological and environmental samples. Sutterella 16S rRNA gene sequences were found in 12 of 23 AUT-GI children but in none of 9 Control-GI children. Phylogenetic analysis revealed a predominance of either Sutterella wadsworthensis or Sutterella stercoricanis in 11 of the individual Sutterella-positive AUT-GI patients; in one AUT-GI patient, Sutterella sequences were obtained that could not be given a species-level classification based on the 16S rRNA gene sequences of known Sutterella isolates. Western immunoblots revealed plasma IgG or IgM antibody reactivity to Sutterella wadsworthensis antigens in 11 AUT-GI patients, 8 of whom were also PCR positive, indicating the presence of an immune response to Sutterella in some children.
This paper introduces a new task of politeness transfer, which involves converting non-polite sentences to polite sentences while preserving their meaning. We also provide a dataset of more than 1.39 million instances automatically labeled for politeness to encourage benchmark evaluations on this new task. We design a tag-and-generate pipeline that identifies stylistic attributes and then generates a sentence in the target style while preserving most of the source content. On politeness as well as five other transfer tasks, our model outperforms state-of-the-art methods on automatic metrics for content preservation, with comparable or better style transfer accuracy. Additionally, our model surpasses existing methods in human evaluations of grammaticality, meaning preservation, and transfer accuracy across all six style transfer tasks. The data and code are located at https://github.com/tag-and-generate/

Introduction
Politeness plays a crucial role in social interaction and is closely tied to power dynamics, the social distance between participants in a conversation, and gender (Brown et al., 1987; Danescu-Niculescu-Mizil et al., 2013). Using the appropriate level of politeness is also imperative for smooth communication in conversations (Coppock, 2005), in organizational settings such as emails (Peterson et al., 2011), memos, and official documents, and in many other settings. Notably, politeness has been identified as an interpersonal style that can be decoupled from content (Kang and Hovy, 2019). Motivated by its central importance, in this paper we study the task of converting non-polite sentences to polite sentences while preserving their meaning.
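To make the tag-and-generate pipeline concrete, here is a minimal rule-based sketch of the two steps. The lexicon, the [TAG] placeholder, and the polite realization below are hypothetical stand-ins for illustration; in the paper both steps are learned models trained on the labeled data.

```python
# Toy sketch of tag-and-generate: a tagger replaces style-bearing tokens
# with a placeholder; a generator realizes the placeholder in the target
# style. All lexicons and phrasings here are assumed, illustrative values.

IMPOLITE_MARKERS = {"now", "immediately", "asap"}   # assumed toy lexicon
POLITE_FILLER = "when you get a chance"             # assumed realization

def tag(sentence: str) -> str:
    """Step 1: mark source-style tokens with a [TAG] placeholder."""
    tokens = sentence.rstrip(".!").split()
    return " ".join("[TAG]" if t.lower() in IMPOLITE_MARKERS else t
                    for t in tokens)

def generate(tagged: str) -> str:
    """Step 2: rewrite the tagged sentence in the polite target style."""
    return "Please " + tagged.replace("[TAG]", POLITE_FILLER).lower() + "."

print(generate(tag("Send me the report now!")))
# -> Please send me the report when you get a chance.
```

The value of the decomposition is that the tagger isolates the style-bearing spans, so the generator can rewrite style while copying the remaining content largely verbatim.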
This work focuses on building language models (LMs) for code-switched text. We propose two techniques that significantly improve these LMs: 1) a novel recurrent neural network unit with dual components that focus separately on each language in the code-switched text, and 2) pretraining the LM on synthetic text produced by a generative model estimated from the training data. We demonstrate the effectiveness of the proposed techniques on a Mandarin-English task, achieving significant reductions in perplexity.
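As a rough illustration of the first technique, the sketch below implements a recurrent unit with two parameter sets, one per language, where a per-token language id selects the active component. The dimensions, initialization, and tanh update are assumptions for illustration, not the paper's exact unit.

```python
# Hedged sketch of a dual-component recurrent unit for code-switched text.
import numpy as np

class DualRNNCell:
    def __init__(self, input_dim: int, hidden_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # One (W, U, b) parameter set per language:
        # index 0 = Mandarin, index 1 = English (assumed convention).
        self.W = rng.normal(0.0, 0.1, (2, hidden_dim, input_dim))
        self.U = rng.normal(0.0, 0.1, (2, hidden_dim, hidden_dim))
        self.b = np.zeros((2, hidden_dim))

    def step(self, x: np.ndarray, h: np.ndarray, lang: int) -> np.ndarray:
        """Update the shared hidden state with the current token's component."""
        return np.tanh(self.W[lang] @ x + self.U[lang] @ h + self.b[lang])

# The language id switches the active component within a single sequence.
cell = DualRNNCell(input_dim=8, hidden_dim=16)
h = np.zeros(16)
for x, lang in [(np.ones(8), 0), (np.ones(8), 1)]:  # Mandarin, then English
    h = cell.step(x, h, lang)
```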
In this work, we present a simple and elegant approach to language modeling for bilingual code-switched text. Since code-switching is a blend of two or more languages, a standard bilingual language model can be improved by exploiting the structure of the underlying monolingual language models. We propose a novel technique called dual language models, which builds two complementary monolingual language models and combines them using a probabilistic model for switching between the two. We evaluate the efficacy of our approach on a conversational Mandarin-English speech corpus. We demonstrate the robustness of our model by showing significant improvements in perplexity over a standard bilingual language model, without the use of any external information. These improvements also carry over consistently to automatic speech recognition error rates.
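The following toy sketch shows one way such a switching combination can be scored: two monolingual next-word models plus an explicit switch probability. The bigram tables, the back-off rule at switch points, and the fixed P_SWITCH value are hypothetical simplifications of the estimated components described above.

```python
# Toy dual language model: two monolingual bigram LMs combined with a
# probabilistic language-switching term. All probabilities are assumed.

LM = {
    "zh": {"<s>": {"你": 0.6, "我": 0.4}, "你": {"好": 1.0}},
    "en": {"<s>": {"ok": 0.7, "so": 0.3}, "ok": {"thanks": 1.0}},
}
P_SWITCH = 0.3  # assumed probability of switching language at each step

def next_word_prob(prev: str, word: str, prev_lang: str, lang: str) -> float:
    """P(word, lang | prev, prev_lang) under the switching model."""
    p_lang = P_SWITCH if lang != prev_lang else 1.0 - P_SWITCH
    # At a switch point the previous word is unknown to the new language's
    # LM, so back off to its sentence-initial distribution (assumed rule).
    context = prev if prev in LM[lang] else "<s>"
    return p_lang * LM[lang].get(context, {}).get(word, 0.0)

# Probability of continuing the Mandarin word "你" with the English "ok":
print(next_word_prob("你", "ok", prev_lang="zh", lang="en"))  # 0.3 * 0.7 = 0.21
```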