The capsule network is a novel architecture that encodes the properties and spatial relationships of features in images, and it has shown encouraging results on image classification. However, the original capsule network is not well suited to classification tasks in which the detected object has complex internal representations. Hence, we propose the Multi-Scale Capsule Network, a novel variant of the capsule network that enhances its computational efficiency and representation capacity. The proposed Multi-Scale Capsule Network consists of two stages. In the first stage, structural and semantic information is obtained through multi-scale feature extraction. In the second stage, the hierarchy of features is encoded into multi-dimensional primary capsules. Moreover, we propose an improved dropout to enhance the robustness of the capsule network. Experimental results show that our method achieves competitive performance on the Fashion-MNIST and CIFAR-10 datasets.
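The two-stage idea in the abstract above, multi-scale feature extraction followed by grouping features into primary capsules, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the pooling windows, capsule dimension, and function names are illustrative assumptions; only the squashing nonlinearity follows the standard capsule-network formulation.

```python
import numpy as np

def squash(v, eps=1e-8):
    # Standard capsule squashing nonlinearity: preserves the
    # vector's orientation and maps its length into [0, 1).
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    return (norm**2 / (1.0 + norm**2)) * v / (norm + eps)

def multi_scale_primary_capsules(feat, scales=(1, 2), caps_dim=8):
    # feat: (H, W, C) feature map. For each scale s, average-pool
    # with an s x s window (a stand-in for multi-scale extraction),
    # then group channels into caps_dim-dimensional capsule vectors.
    H, W, C = feat.shape
    caps = []
    for s in scales:
        pooled = feat[: H - H % s, : W - W % s].reshape(
            H // s, s, W // s, s, C).mean(axis=(1, 3))
        caps.append(pooled.reshape(-1, caps_dim))
    return squash(np.concatenate(caps, axis=0))

rng = np.random.default_rng(0)
# An 8x8x16 feature map yields 128 + 32 = 160 primary capsules of dim 8.
caps = multi_scale_primary_capsules(rng.standard_normal((8, 8, 16)))
```

Because of the squash function, every capsule's length lies in [0, 1) and can be read as the probability that the entity it represents is present.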
This paper presents a novel model for video salient object detection called the spatiotemporal constrained optimization model (SCOM), which exploits spatial and temporal cues, as well as a local constraint, to achieve global saliency optimization. For robust motion estimation of salient objects, we propose a novel approach to modeling motion cues from the optical flow field, the saliency map of the prior video frame, and the motion history of change detection, which is able to distinguish moving salient objects from diverse changing background regions. Furthermore, an effective objectness measure with an intuitive geometrical interpretation is proposed to extract reliable object and background regions, which serve as the basis for defining the foreground potential, the background potential, and the constraint that supports saliency propagation. These potentials and the constraint are formulated into the proposed SCOM framework to generate an optimal saliency map for each frame of a video. The proposed model is extensively evaluated on widely used, challenging benchmark datasets. Experiments demonstrate that our SCOM substantially outperforms state-of-the-art saliency models.
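The general shape of such a saliency optimization, foreground and background potentials plus a smoothness term over neighboring regions, can be sketched as a quadratic energy with a closed-form minimizer. This is a generic sketch of that family of formulations, not SCOM itself: the exact potentials, weights, and constraint in the paper differ, and all names here are illustrative.

```python
import numpy as np

def optimize_saliency(fg, bg, W, lam=1.0):
    # Minimize over per-region saliency values s in R^n:
    #   E(s) = sum_i fg_i * (s_i - 1)^2   (foreground potential)
    #        + sum_i bg_i * s_i^2         (background potential)
    #        + lam * s^T L s              (smoothness over neighbors)
    # where L = D - W is the graph Laplacian of the region adjacency
    # weights W. Setting the gradient to zero gives the linear system
    #   (diag(fg + bg) + lam * L) s = fg.
    L = np.diag(W.sum(axis=1)) - W
    A = np.diag(fg + bg) + lam * L
    return np.linalg.solve(A, fg)

# Toy example: a 3-region chain where region 0 looks like foreground
# and regions 1-2 look like background.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
s = optimize_saliency(np.array([1., 0., 0.]),
                      np.array([0., 1., 1.]), W, lam=0.5)
```

The smoothness term propagates saliency along the region graph, so region 1 receives an intermediate value even though its own foreground potential is zero.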
Recently, deep learning techniques have substantially boosted the performance of salient object detection in still images. However, salient object detection in videos, using either traditional handcrafted features or deep learning features, has not been fully investigated, probably due to the lack of sufficient manually labeled video data for saliency modeling, especially for data-driven deep learning. This paper proposes a novel weakly supervised approach to salient object detection in video, which can learn a robust saliency prediction model from very limited manually labeled data and a large amount of weakly labeled data that can be easily generated in a supervised manner. Furthermore, we propose a spatiotemporal cascade neural network (SCNN) architecture for saliency modeling, in which two fully convolutional networks are cascaded to evaluate visual saliency from both spatial and temporal cues, leading to optimal video saliency prediction. The proposed approach is extensively evaluated on widely used, challenging datasets, and the experiments demonstrate that it substantially outperforms state-of-the-art salient object detection models.

Index Terms—Video saliency, weakly supervised learning, spatiotemporal prior fusion, cascade fully convolutional network

I. INTRODUCTION

Salient object detection, which aims to identify the objects or regions that are noticeable and mostly attract human attention in an image or video, has been a research focus of computer vision for decades. It generally serves as a preprocessing step to support high-level computer vision tasks such as object segmentation, object recognition, object tracking, and content-based video compression. A number of approaches have been proposed to detect salient objects. Recent approaches based on deep Convolutional Neural Networks (CNNs), e.g., [1]-[3], have substantially improved
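The cascade described in the abstract above, a spatial network whose prediction is fed together with temporal information into a second network, can be sketched as a data-flow skeleton. The two "networks" below are hypothetical numpy stand-ins (normalized intensity and a channel mean), not the paper's FCNs; only the wiring of the cascade is the point.

```python
import numpy as np

def spatial_net(frame):
    # Stand-in for the first FCN: a per-frame saliency estimate
    # from appearance alone (here, min-max normalized intensity).
    s = frame - frame.min()
    return s / (s.max() + 1e-8)

def temporal_net(stack):
    # Stand-in for the second FCN: refines the spatial prediction
    # using temporal context (here, a simple channel mean).
    return stack.mean(axis=-1)

def cascade_predict(prev_frame, cur_frame):
    # Cascade wiring: the spatial stage's output is stacked with
    # the raw frames and passed to the temporal stage.
    spatial = spatial_net(cur_frame)
    stack = np.stack([prev_frame, cur_frame, spatial], axis=-1)
    return temporal_net(stack)

prev = np.zeros((4, 4))
cur = np.arange(16.0).reshape(4, 4)
out = cascade_predict(prev, cur)
```

In the actual architecture both stages are learned fully convolutional networks, so the temporal stage can weigh spatial and motion cues rather than averaging them.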
Renal fibrosis is associated with a reduction in functional renal parenchyma and in most cases progresses to end-stage kidney failure, a devastating condition that requires lifelong dialysis or kidney transplantation. However, due to the extreme complexity of the pathogenesis of renal fibrosis and our limited knowledge of it, therapeutic options for renal fibrosis in the clinical setting are still scarce and often ineffective. Hence, further studies on the molecular mechanisms underlying renal fibrosis are urgently needed. Multiple miRNAs have been demonstrated to participate in kidney diseases in a TGF-β-dependent or -independent manner, but very little is known about the role of miR-155-5p in renal fibrosis. In the present study, we first explored the expression level and functions of miR-155-5p in the setting of renal fibrosis. Our research revealed that miR-155-5p is highly expressed in kidney tissues from patients and from unilateral ureteral obstruction (UUO) rat models, and that miR-155-5p knockdown significantly blocks renal fibrosis both in vivo and in vitro. Mechanistically, our data demonstrate that miR-155-5p promotes renal fibrosis by increasing the phosphorylation-mediated activation of STAT3 via targeting SOCS1/6. Altogether, our findings highlight a miR-155-5p/SOCS/STAT3 axis in the pathogenesis of renal fibrosis, which may provide promising therapeutic targets for the clinical prevention of this disease.