Spontaneous pain and function-associated pain are prevalent symptoms of many acute and chronic muscle pathologies. We established mouse models for evaluating spontaneous pain and bite-evoked pain from the masseter muscle, and determined the roles of TRPV1 and the contributions of TRPV1- and NK1-dependent nociceptive pathways. Masseter muscle inflammation increased mouse grimace scale (MGS) scores and face-wiping behavior, both of which were attenuated by pharmacological or genetic inhibition of TRPV1. Masseter inflammation also led to a significant reduction in bite force, but inhibition of TRPV1 only marginally relieved this reduction. These results suggest that TRPV1 contributes to the two types of muscle pain to differing extents. However, chemical ablation of TRPV1-expressing nociceptors, or chemogenetic silencing of TRPV1-lineage nerve terminals in the masseter muscle, attenuated inflammation-induced changes in both MGS scores and bite force. Furthermore, ablation of neurons expressing the neurokinin 1 (NK1) receptor in the trigeminal subnucleus caudalis also prevented both types of muscle pain. Our results suggest that although TRPV1 contributes differentially to spontaneous pain and bite-evoked muscle pain, TRPV1-expressing afferents and NK1-expressing second-order neurons commonly mediate both. Manipulation of this nociceptive circuit may therefore provide a novel approach for the management of acute or chronic craniofacial muscle pain.
Many natural language processing tasks rely solely on sparse dependencies between a few tokens in a sentence. Soft attention mechanisms show promising performance in modeling local and global dependencies via soft probabilities between every pair of tokens, but they are neither effective nor efficient when applied to long sentences. By contrast, hard attention mechanisms directly select a subset of tokens but are difficult and inefficient to train due to their combinatorial nature. In this paper, we integrate both soft and hard attention into one context fusion model, "reinforced self-attention (ReSA)", for their mutual benefit. In ReSA, a hard attention trims a sequence for a soft self-attention to process, while the soft attention feeds reward signals back to facilitate the training of the hard attention. To this end, we develop a novel hard attention mechanism called "reinforced sequence sampling (RSS)", which selects tokens in parallel and is trained via policy gradient. Using two RSS modules, ReSA efficiently extracts the sparse dependencies between each pair of selected tokens. Finally, we propose an RNN/CNN-free sentence-encoding model, "reinforced self-attention network (ReSAN)", based solely on ReSA. It achieves state-of-the-art performance on both the Stanford Natural Language Inference (SNLI) and Sentences Involving Compositional Knowledge (SICK) datasets.
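The interplay the abstract describes (a hard module samples a token subset in parallel, a soft self-attention fuses the survivors, and a scalar reward drives a policy-gradient update of the sampler) can be illustrated with a minimal NumPy sketch. This is not the paper's actual ReSA/RSS architecture: the linear scorer `rss_select`, the single-head dot-product attention, and the placeholder `reward` are all simplifying assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rss_select(x, w, b, rng):
    # Hard attention (RSS-style sketch): each token is kept or dropped
    # independently, so all draws happen in parallel. Keep-probabilities
    # come from a per-token linear score passed through a sigmoid.
    logits = x @ w + b                      # (n,) one score per token
    p_keep = 1.0 / (1.0 + np.exp(-logits))  # Bernoulli keep-probability
    mask = rng.random(p_keep.shape) < p_keep
    if not mask.any():
        mask[0] = True  # guard: keep at least one token for the soft module
    return mask, p_keep

def soft_self_attention(x):
    # Soft attention: scaled dot-product self-attention over the
    # selected tokens only (the hard module already trimmed the rest).
    scores = (x @ x.T) / np.sqrt(x.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    att = np.exp(scores)
    att /= att.sum(axis=-1, keepdims=True)
    return att @ x

# Toy input: 6 tokens with 4-dim embeddings; w, b are the sampler's params.
x = rng.standard_normal((6, 4))
w, b = rng.standard_normal(4), 0.0

mask, p_keep = rss_select(x, w, b, rng)
selected = x[mask]                      # hard attention trims the sequence
fused = soft_self_attention(selected)   # soft attention fuses what is left

# REINFORCE: a reward fed back from the downstream/soft side scales the
# score-function gradient d log P(mask) / d logits = mask - p_keep.
reward = 1.0  # placeholder; in training this would come from the task loss
grad_logp = np.where(mask, 1.0 - p_keep, -p_keep)
grad_w = x.T @ (reward * grad_logp)     # policy-gradient direction for w
```

The key property being illustrated is parallelism: because each token's Bernoulli draw is independent given its score, selection costs one vectorized pass rather than a sequential scan, while the soft attention then runs on a shorter sequence.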
scite is a Brooklyn-based organization that helps researchers discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers around the world and is funded in part by the National Science Foundation and by the National Institute on Drug Abuse of the National Institutes of Health.