Background and Aims: The effect of antidepressant therapy on inflammatory bowel disease (IBD) remains controversial. This trial aimed to assess whether adding venlafaxine to standard IBD therapy improved quality of life (QoL), mental health, and disease activity in patients with IBD who had anxious or depressive symptoms.
Methods: A prospective, randomized, double-blind, placebo-controlled clinical trial was conducted. Participants diagnosed with IBD and presenting symptoms of anxiety or depression were randomly assigned to receive either venlafaxine 150 mg daily or matching placebo and were followed for 6 months. The Inflammatory Bowel Disease Questionnaire (IBDQ), Mayo score, Crohn's Disease Activity Index (CDAI), Hospital Anxiety and Depression Scale (HADS), and blood examinations were completed before enrollment and during and after follow-up. Mixed linear models and univariate analyses were used to compare groups.
Results: Forty-five patients with IBD were included, of whom 25 were randomized to venlafaxine. The mean age was 40.00 (SD = 13.12) years and 25 (55.6%) were male. Venlafaxine showed a significant improvement in QoL (p < 0.001) and disease course (p = 0.035), and a greater reduction in HADS scores (anxiety: p < 0.001; depression: p < 0.001), Mayo score (p < 0.001), and CDAI (p = 0.006) after 6 months. Venlafaxine had no effect on IL-10 expression, endoscopic scores, relapse rate, or use of biologics and corticosteroids, but it did reduce the erythrocyte sedimentation rate (ESR; p = 0.003), C-reactive protein (CRP; p < 0.001), and tumor necrosis factor-α (TNF-α; p = 0.009).
Conclusions: Venlafaxine has a significant beneficial effect on QoL, IBD activity, and mental health in patients with IBD and comorbid anxious or depressive symptoms. (Chinese Clinical Trial Registry, ID: ChiCTR1900021496).
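The abstract states that mixed linear models were used to compare groups over repeated visits. A minimal sketch of how such a comparison could be set up with statsmodels is shown below; it is not the authors' analysis code, and the column names (ibdq, group, visit, patient_id) and the simulated data are purely illustrative.

```python
# Illustrative mixed linear model for repeated QoL measurements (assumed setup).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_visits = 45, 3
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_visits),
    "visit": np.tile(np.arange(n_visits), n_patients),
    # 0 = placebo, 1 = venlafaxine (hypothetical coding)
    "group": np.repeat(rng.integers(0, 2, n_patients), n_visits),
})
# Simulated IBDQ scores: treated patients improve more over visits.
df["ibdq"] = 150 + 10 * df["group"] * df["visit"] + rng.normal(0, 15, len(df))

# Random intercept per patient; the group-by-visit interaction tests whether
# the QoL trajectory differs between treatment arms.
model = smf.mixedlm("ibdq ~ group * visit", df, groups=df["patient_id"])
print(model.fit().summary())
```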
E-commerce platforms usually display a mixed list of ads and organic items in their feeds. One key problem is to allocate the limited slots in the feed so as to maximize overall revenue while also improving user experience, which requires a good model of user preference. Instead of modeling the influence of individual items on user behavior, the arrangement signal models the influence of the arrangement of items and may lead to a better allocation strategy. However, most previous strategies fail to model such a signal and therefore yield suboptimal performance. To this end, we propose Cross Deep Q Network (Cross DQN) to extract the arrangement signal by crossing the embeddings of different items and processing the crossed sequence in the feed. Our model achieves higher revenue and better user experience than state-of-the-art baselines in offline experiments. Moreover, our model demonstrates a significant improvement in an online A/B test and has been fully deployed on the Meituan feed, serving more than 300 million customers.
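To make the "crossing the embeddings of different items" idea concrete, here is a minimal sketch of one possible interpretation: pairwise interactions between all item embeddings in a candidate slate are fed to a small network that scores the slots jointly. This is an assumption for illustration, not the authors' Cross DQN architecture; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class PairwiseCross(nn.Module):
    """Hypothetical sketch of an arrangement-aware slate scorer.

    Instead of scoring items independently, element-wise products between every
    pair of item embeddings (the "crossed" features) are flattened and passed
    to an MLP that outputs one Q-value per slot.
    """
    def __init__(self, emb_dim: int, num_slots: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_slots * num_slots * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_slots),  # one Q-value per slot decision
        )

    def forward(self, item_embs: torch.Tensor) -> torch.Tensor:
        # item_embs: (batch, num_slots, emb_dim)
        # Crossing captures how items interact when placed together in a feed.
        crossed = item_embs.unsqueeze(2) * item_embs.unsqueeze(1)  # (B, S, S, D)
        return self.mlp(crossed.flatten(start_dim=1))

q_net = PairwiseCross(emb_dim=8, num_slots=5)
q_values = q_net(torch.randn(4, 5, 8))  # shape (4, 5)
```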
Exploration is essential for reinforcement learning (RL). To address the challenges of exploration, we consider a reward-free RL framework that completely separates exploration from exploitation and brings new challenges for exploration algorithms. In the exploration phase, the agent learns an exploratory policy by interacting with a reward-free environment and collects a dataset of transitions by executing the policy. In the planning phase, the agent computes a good policy for any given reward function based on the dataset, without further interaction with the environment. This framework is suitable for the meta-RL setting, where there are many reward functions of interest. In the exploration phase, we propose to maximize the Rényi entropy over the state-action space and justify this objective theoretically. The success of using Rényi entropy as the objective stems from its encouragement to explore hard-to-reach state-action pairs. We further derive a policy gradient formulation for this objective and design a practical exploration algorithm that can handle complex environments. In the planning phase, we solve for good policies given arbitrary reward functions using a batch RL algorithm. Empirically, we show that our exploration algorithm is effective and sample-efficient, and yields superior policies for arbitrary reward functions in the planning phase.
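For reference, the Rényi entropy of order α of a distribution over state-action pairs is given below; the abstract does not specify the order α or the exact distribution being maximized, so writing it over the state-action visitation distribution d^π of the exploratory policy π is an assumption for illustration.

```latex
H_{\alpha}\!\left(d^{\pi}\right)
  = \frac{1}{1-\alpha}\,\log \sum_{(s,a)} d^{\pi}(s,a)^{\alpha},
  \qquad \alpha > 0,\ \alpha \neq 1 .
```

As α approaches 1 this recovers Shannon entropy, while smaller α places relatively more weight on low-probability state-action pairs, which is consistent with the claim that the objective encourages visiting hard-to-reach state-actions.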
Recently, various auxiliary tasks have been proposed to accelerate representation learning and improve sample efficiency in deep reinforcement learning (RL). However, existing auxiliary tasks do not take the characteristics of RL problems into consideration and are unsupervised. By leveraging returns, the most important feedback signal in RL, we propose a novel auxiliary task that forces the learnt representations to discriminate state-action pairs with different returns. Our auxiliary loss is theoretically justified to learn representations that capture the structure of a new form of state-action abstraction, under which state-action pairs with similar return distributions are aggregated together. In the low-data regime, our algorithm outperforms strong baselines on complex tasks in Atari games and the DeepMind Control Suite, and achieves even better performance when combined with existing auxiliary tasks. * Author contributions: Guoqing Liu implemented the algorithm, optimized the code, and analyzed the experimental results. Chuheng Zhang proposed the theoretical framework, designed the algorithm, and proved the theorem. Li Zhao initiated the idea and provided suggestions for the project.
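One way to read "discriminate state-action pairs with different returns" is as a pairwise auxiliary loss in which the encoder must tell whether two samples have similar returns. The sketch below illustrates that reading under stated assumptions; the function, the random pairing scheme, and the similarity threshold are hypothetical and are not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def return_discrimination_loss(encoder: nn.Module,
                               obs: torch.Tensor,
                               actions: torch.Tensor,
                               returns: torch.Tensor,
                               threshold: float = 0.1) -> torch.Tensor:
    """Hypothetical auxiliary loss: latent codes of state-action pairs should
    predict whether two randomly paired samples have similar returns.

    `encoder` is assumed to map (obs, action) to a latent vector of shape (B, D).
    """
    z = encoder(obs, actions)                       # (B, D)
    perm = torch.randperm(z.size(0))                # pair sample i with sample perm[i]
    logits = (z * z[perm]).sum(dim=-1)              # similarity score per pair
    # Positive label when the two returns are close, negative otherwise.
    labels = (returns - returns[perm]).abs().lt(threshold).float()
    return F.binary_cross_entropy_with_logits(logits, labels)
```

In practice such a loss would be added to the usual RL objective with a small weight, so the representation is shaped by return structure without changing the policy-learning update itself.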