The K⁺ channel pore-forming subunit Kv4.3 is expressed in a subset of nonpeptidergic nociceptors within the dorsal root ganglion (DRG), and knockdown of Kv4.3 selectively induces mechanical hypersensitivity, a major symptom of neuropathic pain. K⁺ channel modulatory subunits KChIP1, KChIP2, and DPP10 are coexpressed in Kv4.3⁺ DRG neurons, but whether they participate in Kv4.3-mediated pain control is unknown. Here, we show the existence of a Kv4.3/KChIP1/KChIP2/DPP10 complex (abbreviated as the Kv4 complex) in the endoplasmic reticulum and cell surface of DRG neurons. After intrathecal injection of a gene-specific antisense oligodeoxynucleotide to knock down the expression of each component in the Kv4 complex, mechanical hypersensitivity develops in the hindlimbs of rats in parallel with a reduction in all components in the lumbar DRGs. Electrophysiological data further indicate that the excitability of nonpeptidergic nociceptors is enhanced. The expression of all Kv4 complex components in DRG neurons is downregulated following spinal nerve ligation (SNL). To rescue Kv4 complex downregulation, cDNA constructs encoding Kv4.3, KChIP1, and DPP10 were transfected into the injured DRGs (defined as DRGs with injured spinal nerves) of living SNL rats. SNL-evoked mechanical hypersensitivity was attenuated, accompanied by a partial recovery of Kv4.3, KChIP1, and DPP10 surface levels in the injured DRGs. By showing an interdependent regulation among components in the Kv4 complex, this study demonstrates that K⁺ channel modulatory subunits KChIP1, KChIP2, and DPP10 participate in Kv4.3-mediated mechanical pain control. Thus, these modulatory subunits could be potential drug targets for neuropathic pain.
We demonstrate a reinforcement learning agent which uses a compositional recurrent neural network that takes as input an LTL formula and determines satisfying actions. The input LTL formulas have never been seen before, yet the network performs zero-shot generalization to satisfy them. This is a novel form of multi-task learning for RL agents, where agents learn from one diverse set of tasks and generalize to a new set of diverse tasks. The formulation of the network enables this capacity to generalize. We demonstrate this ability in two domains. In a symbolic domain, the agent finds a sequence of letters that is accepted by the formula. In a Minecraft-like environment, the agent finds a sequence of actions that conforms to the formula. While prior work could learn to execute one formula reliably given examples of that formula, we demonstrate how to encode all formulas reliably. This could form the basis of new multi-task agents that discover sub-tasks and execute them without any additional training, as well as agents that follow more complex linguistic commands. The structures required for this generalization are specific to LTL formulas, which opens up an interesting theoretical question: what structures are required in neural networks for zero-shot generalization to different logics?
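To make the task concrete, the following is a minimal sketch of what it means for a trace of agent observations to satisfy an LTL formula, using finite-trace semantics as a simplifying assumption. The tuple encoding and the `holds` helper are hypothetical illustrations, not the paper's network or representation.

```python
# Hypothetical finite-trace LTL evaluator, for illustration only.
# Formulas are nested tuples: ("atom", p), ("not", f), ("and", f, g),
# ("next", f), ("until", f, g), ("eventually", f), ("always", f).
# A trace is a list of sets of atomic propositions, one set per time step.

def holds(formula, trace, i=0):
    """Return True if `formula` holds at position i of the finite trace."""
    op = formula[0]
    if op == "atom":
        return i < len(trace) and formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "next":
        return holds(formula[1], trace, i + 1)
    if op == "until":  # f U g: g eventually holds, and f holds at every step before
        return any(holds(formula[2], trace, k) and
                   all(holds(formula[1], trace, j) for j in range(i, k))
                   for k in range(i, len(trace)))
    if op == "eventually":
        return any(holds(formula[1], trace, k) for k in range(i, len(trace)))
    if op == "always":
        return all(holds(formula[1], trace, k) for k in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# Example: "eventually a, and avoid b until a" over a trace where a occurs at step 2.
trace = [{"c"}, {"c"}, {"a"}]
phi = ("and", ("eventually", ("atom", "a")),
              ("until", ("not", ("atom", "b")), ("atom", "a")))
```

An agent in the symbolic domain succeeds when the sequence of letters it emits, viewed as such a trace, makes the given formula hold; the learning problem is to produce satisfying traces for formulas never seen during training.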
We demonstrate how a sequence model and a sampling-based planner can influence each other to produce efficient plans and how such a model can automatically learn to take advantage of observations of the environment. Sampling-based planners such as RRT generally know nothing of their environments even if they have traversed similar spaces many times. A sequence model, such as an HMM or LSTM, guides the search for good paths. The resulting model, called DeRRT*, observes the state of the planner and the local environment to bias the next move and next planner state. The neural-network-based models avoid manual feature engineering by co-training a convolutional network which processes map features and observations from sensors. We incorporate this sequence model in a manner that combines its likelihood with the existing bias for searching large unexplored Voronoi regions. This leads to more efficient trajectories with fewer rejected samples even in difficult domains such as when escaping bug traps. This model can also be used for dimensionality reduction in multi-agent environments with dynamic obstacles. Instead of planning in a high-dimensional space that includes the configurations of the other agents, we plan in a low-dimensional subspace relying on the sequence model to bias samples using the observed behavior of the other agents. The techniques presented here are general, include both graphical models and deep learning approaches, and can be adapted to a range of planners.
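The combination of a learned proposal with the planner's native exploration bias can be sketched as a mixture sampler: with some probability, the next RRT sample is drawn from the sequence model's proposal; otherwise it falls back to uniform sampling, which is what preserves the Voronoi bias toward large unexplored regions. The function names and the `mix` parameter here are illustrative assumptions, not the paper's actual interface.

```python
import random

def biased_sample(bounds, proposal=None, mix=0.5):
    """Draw the next RRT sample point.

    bounds   -- list of (lo, hi) pairs, one per configuration dimension
    proposal -- optional callable returning a sample from a learned model
                (e.g., a sequence model conditioned on planner state);
                hypothetical stand-in for the paper's sequence model
    mix      -- probability of using the learned proposal instead of
                uniform sampling; uniform draws retain the Voronoi bias
                toward large unexplored regions of the space
    """
    if proposal is not None and random.random() < mix:
        return proposal()
    return tuple(random.uniform(lo, hi) for lo, hi in bounds)

# Usage: a 2-D workspace with a (fake) proposal that always suggests the center.
workspace = [(0.0, 10.0), (0.0, 10.0)]
point = biased_sample(workspace, proposal=lambda: (5.0, 5.0), mix=0.5)
```

Raising `mix` leans harder on the learned model (fewer rejected samples when the model is well-trained); lowering it recovers plain RRT behavior, so a poorly trained proposal degrades gracefully rather than trapping the search.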
Humans are remarkably flexible when understanding new sentences that include combinations of concepts they have never encountered before. Recent work has shown that while deep networks can mimic some human language abilities when presented with novel sentences, systematic variation uncovers the limitations in the language-understanding abilities of networks. We demonstrate that these limitations can be overcome by addressing the generalization challenges in the gSCAN dataset, which explicitly measures how well an agent is able to interpret novel linguistic commands grounded in vision, e.g., novel pairings of adjectives and nouns. The key principle we employ is compositionality: that the compositional structure of networks should reflect the compositional structure of the problem domain they address, while allowing other parameters to be learned end-to-end. We build a general-purpose mechanism that enables agents to generalize their language understanding to compositional domains. Crucially, our network has the same state-of-the-art performance as prior work while generalizing its knowledge when prior work does not. Our network also provides a level of interpretability that enables users to inspect what each part of the network learns. Robust grounded language understanding without dramatic failures and without corner cases is critical to building safe and fair robots; we demonstrate the significant role that compositionality can play in achieving that goal.