Diabetic nephropathy (DN), the primary cause of end-stage kidney disease, is a common complication of diabetes. Recent research has shown that activation of nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) and the NACHT, LRR and PYD domain-containing protein 3 (NLRP3) inflammasome is associated with inflammation during the progression of DN, but the exact mechanism is unclear. Long noncoding RNAs (lncRNAs) play roles in the development of many diseases, including DN. However, the relationship between lncRNAs and inflammation in DN remains largely unknown. Our previous study revealed, by RNA sequencing and real-time quantitative PCR (qRT-PCR), that 14 lncRNAs are abnormally expressed in the renal tissues of db/db DN mice. In this study, the expression of these lncRNAs was verified by qRT-PCR in mesangial cells (MCs) cultured under high- and low-glucose conditions. Twelve lncRNAs displayed the same expression tendencies in both renal tissues and MCs. In particular, long intergenic noncoding RNA (lincRNA)-Gm4419 was the only one of these 12 lncRNAs associated with NF-κB by bioinformatics analysis. Moreover, Gm4419 knockdown markedly inhibited the expression of pro-inflammatory cytokines and renal fibrosis biomarkers and reduced cell proliferation in MCs under high-glucose conditions, whereas Gm4419 overexpression increased inflammation, fibrosis and cell proliferation in MCs under low-glucose conditions. Interestingly, our results showed that Gm4419 can activate the NF-κB pathway by interacting directly with p50, a subunit of NF-κB. In addition, we found that p50 can interact with the NLRP3 inflammasome in MCs. In conclusion, our findings suggest that lincRNA-Gm4419 may participate in inflammation, fibrosis and proliferation in MCs under high-glucose conditions through the NF-κB/NLRP3 inflammasome signaling pathway, and may provide new insights into the regulation of Gm4419 during the progression of DN.
Dense video captioning is a newly emerging task that aims at both localizing and describing all events in a video. We identify and tackle two challenges in this task, namely, (1) how to utilize both past and future contexts for accurate event proposal predictions, and (2) how to construct informative input to the decoder for generating natural event descriptions. First, previous works predominantly generate temporal event proposals in the forward direction, which neglects future video context. We propose a bidirectional proposal method that effectively exploits both past and future contexts to make proposal predictions. Second, different events ending at (nearly) the same time are indistinguishable in previous works, resulting in identical captions. We solve this problem by representing each event with an attentive fusion of hidden states from the proposal module and video contents (e.g., C3D features). We further propose a novel context gating mechanism to dynamically balance the contributions from the current event and its surrounding contexts. We empirically show that our attentively fused event representation is superior to the proposal hidden states or video contents alone. By coupling the proposal and captioning modules into one unified framework, our model outperforms the state of the art on the ActivityNet Captions dataset with a relative gain of over 100% (Meteor score increases from 4.82 to 9.65).
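The context gating idea described above can be sketched as a learned sigmoid gate that interpolates, element-wise, between the current event representation and its surrounding context. This is a minimal NumPy illustration, not the paper's implementation: the gate parameters `W` and `b`, the vector dimension, and the random feature vectors are all hypothetical stand-ins for the model's actual LSTM hidden states.

```python
import numpy as np

rng = np.random.default_rng(0)

def context_gate(event, context, W, b):
    """Sigmoid gate in (0, 1) deciding, per dimension, how much of the
    current event vs. the surrounding context enters the fused vector."""
    z = np.concatenate([event, context]) @ W + b
    g = 1.0 / (1.0 + np.exp(-z))              # element-wise sigmoid
    return g * event + (1.0 - g) * context    # convex combination

d = 4                                          # hypothetical feature size
event = rng.standard_normal(d)                 # stand-in for event features
context = rng.standard_normal(d)               # stand-in for context features
W = rng.standard_normal((2 * d, d)) * 0.1      # hypothetical gate weights
b = np.zeros(d)

fused = context_gate(event, context, W, b)
```

Because the gate output lies strictly between 0 and 1, each fused dimension lies between the corresponding event and context values, which is what lets the model smoothly trade off the two sources.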
A common limitation of neuroimaging studies is their small sample sizes. To overcome this hurdle, the Enhancing Neuro Imaging Genetics through Meta-Analysis (ENIGMA) Consortium combines neuroimaging data from many institutions worldwide. However, this introduces heterogeneity due to different scanning devices and sequences. ENIGMA projects commonly address this heterogeneity with random-effects meta-analysis or mixed-effects mega-analysis. Here we tested whether the batch adjustment method, ComBat, can further reduce site-related heterogeneity and thus increase statistical power. We conducted random-effects meta-analyses, mixed-effects mega-analyses and ComBat mega-analyses to compare cortical thickness, surface area and subcortical volumes between 2897 individuals with a diagnosis of schizophrenia and 3141 healthy controls from 33 sites. Specifically, we compared the imaging data between individuals with schizophrenia and healthy controls, covarying for age and sex. The use of ComBat substantially increased the statistical significance of the findings as compared to random-effects meta-analyses. The findings were more similar when comparing ComBat with mixed-effects mega-analysis, although ComBat still slightly increased the statistical significance. ComBat also showed increased statistical power when we repeated the analyses with fewer sites. Results were nearly identical when we applied the ComBat harmonization separately for cortical thickness, cortical surface area and subcortical volumes. Therefore, we recommend applying the ComBat function to attenuate potential effects of site in ENIGMA projects and other multi-site structural imaging work. We provide easy-to-use functions in R that work even if imaging data are partially missing in some brain regions, and they can be trained with one data set and then applied to another (a requirement for some analyses such as machine learning).
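The site-harmonization idea behind ComBat can be illustrated with a much simpler location-scale adjustment: align each site's mean and variance to the pooled estimates. This is only a sketch of the general principle; real ComBat additionally shrinks the per-site parameters with empirical Bayes and preserves covariates of interest (e.g., diagnosis, age, sex), and the data below are synthetic.

```python
import numpy as np

def harmonize_location_scale(X, sites):
    """Simplified location-scale harmonization: rescale each site's
    features to the pooled mean and standard deviation.
    X: (n_subjects, n_features) array; sites: (n_subjects,) site labels."""
    X = np.asarray(X, dtype=float)
    grand_mean = X.mean(axis=0)
    grand_std = X.std(axis=0)
    out = np.empty_like(X)
    for s in np.unique(sites):
        idx = sites == s
        mu = X[idx].mean(axis=0)
        sd = X[idx].std(axis=0)
        out[idx] = (X[idx] - mu) / sd * grand_std + grand_mean
    return out

rng = np.random.default_rng(1)
sites = np.array([0] * 50 + [1] * 50)
X = rng.standard_normal((100, 3))
X[sites == 1] += 2.0                 # simulated scanner offset at site 1
X_h = harmonize_location_scale(X, sites)
```

After adjustment the simulated site offset is gone: both sites share the pooled mean and variance, which is the kind of heterogeneity reduction the abstract reports increasing statistical power.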
Recently, much progress has been made in image captioning, and an encoder-decoder framework has been adopted by all the state-of-the-art models. Under this framework, an input image is encoded by a convolutional neural network (CNN) and then translated into natural language with a recurrent neural network (RNN). Existing models based on this framework employ only a single kind of CNN, e.g., ResNet or Inception-X, which describes image content from only one specific viewpoint. Thus, the semantic meaning of an input image cannot be comprehensively understood, which restricts captioning performance. In this paper, in order to exploit the complementary information from multiple encoders, we propose a novel Recurrent Fusion Network (RFNet) for image captioning. The fusion process in our model exploits the interactions among the outputs of the image encoders and generates new compact yet informative representations for the decoder. Experiments on the MSCOCO dataset demonstrate the effectiveness of the proposed RFNet, which sets a new state of the art for image captioning.
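The idea of fusing complementary representations from multiple image encoders can be sketched as a simple attention-weighted combination: score each encoder's feature vector against a decoder query and sum them with softmax weights. This is a hypothetical NumPy illustration of the fusion principle, not RFNet's actual recurrent fusion procedure; the feature vectors, query, and dimensions are invented for the example.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())          # shift for numerical stability
    return e / e.sum()

def fuse_encoders(features, query):
    """Attention-style fusion: weight each encoder's feature vector by
    its similarity to a decoder query, then take the weighted sum."""
    F = np.stack(features)           # (num_encoders, d)
    scores = F @ query               # one similarity score per encoder
    weights = softmax(scores)        # weights sum to 1
    return weights @ F               # (d,) fused representation

rng = np.random.default_rng(0)
d = 5                                               # hypothetical feature size
feats = [rng.standard_normal(d) for _ in range(3)]  # e.g., three CNN encoders
query = rng.standard_normal(d)                      # stand-in decoder state
fused = fuse_encoders(feats, query)
```

Because the softmax weights are non-negative and sum to one, the fused vector is a convex combination of the encoder features, so no single encoder's "viewpoint" dominates unless its score warrants it.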